Jan 10 16:23:51 localhost kernel: Linux version 5.14.0-655.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Mon Dec 29 08:24:22 UTC 2025
Jan 10 16:23:51 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 10 16:23:51 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 16:23:51 localhost kernel: BIOS-provided physical RAM map:
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 10 16:23:51 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 10 16:23:51 localhost kernel: NX (Execute Disable) protection: active
Jan 10 16:23:51 localhost kernel: APIC: Static calls initialized
Jan 10 16:23:51 localhost kernel: SMBIOS 2.8 present.
Jan 10 16:23:51 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 10 16:23:51 localhost kernel: Hypervisor detected: KVM
Jan 10 16:23:51 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 10 16:23:51 localhost kernel: kvm-clock: using sched offset of 3268518381 cycles
Jan 10 16:23:51 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 10 16:23:51 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 10 16:23:51 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 10 16:23:51 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 10 16:23:51 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 10 16:23:51 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 10 16:23:51 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 10 16:23:51 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 10 16:23:51 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 10 16:23:51 localhost kernel: Using GB pages for direct mapping
Jan 10 16:23:51 localhost kernel: RAMDISK: [mem 0x2d461000-0x32a28fff]
Jan 10 16:23:51 localhost kernel: ACPI: Early table checksum verification disabled
Jan 10 16:23:51 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 10 16:23:51 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 16:23:51 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 16:23:51 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 16:23:51 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 10 16:23:51 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 16:23:51 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 16:23:51 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 10 16:23:51 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 10 16:23:51 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 10 16:23:51 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 10 16:23:51 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 10 16:23:51 localhost kernel: No NUMA configuration found
Jan 10 16:23:51 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 10 16:23:51 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 10 16:23:51 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 10 16:23:51 localhost kernel: Zone ranges:
Jan 10 16:23:51 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 10 16:23:51 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 10 16:23:51 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 10 16:23:51 localhost kernel:   Device   empty
Jan 10 16:23:51 localhost kernel: Movable zone start for each node
Jan 10 16:23:51 localhost kernel: Early memory node ranges
Jan 10 16:23:51 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 10 16:23:51 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 10 16:23:51 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 10 16:23:51 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 10 16:23:51 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 10 16:23:51 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 10 16:23:51 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 10 16:23:51 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 10 16:23:51 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 10 16:23:51 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 10 16:23:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 10 16:23:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 10 16:23:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 10 16:23:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 10 16:23:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 10 16:23:51 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 10 16:23:51 localhost kernel: TSC deadline timer available
Jan 10 16:23:51 localhost kernel: CPU topo: Max. logical packages:   8
Jan 10 16:23:51 localhost kernel: CPU topo: Max. logical dies:       8
Jan 10 16:23:51 localhost kernel: CPU topo: Max. dies per package:   1
Jan 10 16:23:51 localhost kernel: CPU topo: Max. threads per core:   1
Jan 10 16:23:51 localhost kernel: CPU topo: Num. cores per package:     1
Jan 10 16:23:51 localhost kernel: CPU topo: Num. threads per package:   1
Jan 10 16:23:51 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 10 16:23:51 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 10 16:23:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 10 16:23:51 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 10 16:23:51 localhost kernel: Booting paravirtualized kernel on KVM
Jan 10 16:23:51 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 10 16:23:51 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 10 16:23:51 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 10 16:23:51 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 10 16:23:51 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 10 16:23:51 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 10 16:23:51 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 16:23:51 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64", will be passed to user space.
Jan 10 16:23:51 localhost kernel: random: crng init done
Jan 10 16:23:51 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 10 16:23:51 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 10 16:23:51 localhost kernel: Fallback order for Node 0: 0 
Jan 10 16:23:51 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 10 16:23:51 localhost kernel: Policy zone: Normal
Jan 10 16:23:51 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 10 16:23:51 localhost kernel: software IO TLB: area num 8.
Jan 10 16:23:51 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 10 16:23:51 localhost kernel: ftrace: allocating 49414 entries in 194 pages
Jan 10 16:23:51 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 10 16:23:51 localhost kernel: Dynamic Preempt: voluntary
Jan 10 16:23:51 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 10 16:23:51 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 10 16:23:51 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 10 16:23:51 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 10 16:23:51 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 10 16:23:51 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 10 16:23:51 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 10 16:23:51 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 10 16:23:51 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 16:23:51 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 16:23:51 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 16:23:51 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 10 16:23:51 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 10 16:23:51 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 10 16:23:51 localhost kernel: Console: colour VGA+ 80x25
Jan 10 16:23:51 localhost kernel: printk: console [ttyS0] enabled
Jan 10 16:23:51 localhost kernel: ACPI: Core revision 20230331
Jan 10 16:23:51 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 10 16:23:51 localhost kernel: x2apic enabled
Jan 10 16:23:51 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 10 16:23:51 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 10 16:23:51 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 10 16:23:51 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 10 16:23:51 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 10 16:23:51 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 10 16:23:51 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 10 16:23:51 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 10 16:23:51 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 10 16:23:51 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 10 16:23:51 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 10 16:23:51 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 10 16:23:51 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 10 16:23:51 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 10 16:23:51 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 10 16:23:51 localhost kernel: x86/bugs: return thunk changed
Jan 10 16:23:51 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 10 16:23:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 10 16:23:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 10 16:23:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 10 16:23:51 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 10 16:23:51 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 10 16:23:51 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 10 16:23:51 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 10 16:23:51 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 10 16:23:51 localhost kernel: landlock: Up and running.
Jan 10 16:23:51 localhost kernel: Yama: becoming mindful.
Jan 10 16:23:51 localhost kernel: SELinux:  Initializing.
Jan 10 16:23:51 localhost kernel: LSM support for eBPF active
Jan 10 16:23:51 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 10 16:23:51 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 10 16:23:51 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 10 16:23:51 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 10 16:23:51 localhost kernel: ... version:                0
Jan 10 16:23:51 localhost kernel: ... bit width:              48
Jan 10 16:23:51 localhost kernel: ... generic registers:      6
Jan 10 16:23:51 localhost kernel: ... value mask:             0000ffffffffffff
Jan 10 16:23:51 localhost kernel: ... max period:             00007fffffffffff
Jan 10 16:23:51 localhost kernel: ... fixed-purpose events:   0
Jan 10 16:23:51 localhost kernel: ... event mask:             000000000000003f
Jan 10 16:23:51 localhost kernel: signal: max sigframe size: 1776
Jan 10 16:23:51 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 10 16:23:51 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 10 16:23:51 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 10 16:23:51 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 10 16:23:51 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 10 16:23:51 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 10 16:23:51 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 10 16:23:51 localhost kernel: node 0 deferred pages initialised in 8ms
Jan 10 16:23:51 localhost kernel: Memory: 7763860K/8388068K available (16384K kernel code, 5796K rwdata, 13908K rodata, 4196K init, 7200K bss, 618248K reserved, 0K cma-reserved)
Jan 10 16:23:51 localhost kernel: devtmpfs: initialized
Jan 10 16:23:51 localhost kernel: x86/mm: Memory block size: 128MB
Jan 10 16:23:51 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 10 16:23:51 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 10 16:23:51 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 10 16:23:51 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 10 16:23:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 10 16:23:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 10 16:23:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 10 16:23:51 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 10 16:23:51 localhost kernel: audit: type=2000 audit(1768062229.520:1): state=initialized audit_enabled=0 res=1
Jan 10 16:23:51 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 10 16:23:51 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 10 16:23:51 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 10 16:23:51 localhost kernel: cpuidle: using governor menu
Jan 10 16:23:51 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 10 16:23:51 localhost kernel: PCI: Using configuration type 1 for base access
Jan 10 16:23:51 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 10 16:23:51 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 10 16:23:51 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 10 16:23:51 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 10 16:23:51 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 10 16:23:51 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 10 16:23:51 localhost kernel: Demotion targets for Node 0: null
Jan 10 16:23:51 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 10 16:23:51 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 10 16:23:51 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 10 16:23:51 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 10 16:23:51 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 10 16:23:51 localhost kernel: ACPI: Interpreter enabled
Jan 10 16:23:51 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 10 16:23:51 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 10 16:23:51 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 10 16:23:51 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 10 16:23:51 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 10 16:23:51 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 10 16:23:51 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [3] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [4] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [5] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [6] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [7] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [8] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [9] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [10] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [11] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [12] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [13] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [14] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [15] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [16] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [17] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [18] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [19] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [20] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [21] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [22] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [23] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [24] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [25] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [26] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [27] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [28] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [29] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [30] registered
Jan 10 16:23:51 localhost kernel: acpiphp: Slot [31] registered
Jan 10 16:23:51 localhost kernel: PCI host bridge to bus 0000:00
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 10 16:23:51 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 10 16:23:51 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 10 16:23:51 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 10 16:23:51 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 10 16:23:51 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 10 16:23:51 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 10 16:23:51 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 10 16:23:51 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 10 16:23:51 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 10 16:23:51 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 10 16:23:51 localhost kernel: iommu: Default domain type: Translated
Jan 10 16:23:51 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 10 16:23:51 localhost kernel: SCSI subsystem initialized
Jan 10 16:23:51 localhost kernel: ACPI: bus type USB registered
Jan 10 16:23:51 localhost kernel: usbcore: registered new interface driver usbfs
Jan 10 16:23:51 localhost kernel: usbcore: registered new interface driver hub
Jan 10 16:23:51 localhost kernel: usbcore: registered new device driver usb
Jan 10 16:23:51 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 10 16:23:51 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 10 16:23:51 localhost kernel: PTP clock support registered
Jan 10 16:23:51 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 10 16:23:51 localhost kernel: NetLabel: Initializing
Jan 10 16:23:51 localhost kernel: NetLabel:  domain hash size = 128
Jan 10 16:23:51 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 10 16:23:51 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 10 16:23:51 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 10 16:23:51 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 10 16:23:51 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 10 16:23:51 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 10 16:23:51 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 10 16:23:51 localhost kernel: vgaarb: loaded
Jan 10 16:23:51 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 10 16:23:51 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 10 16:23:51 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 10 16:23:51 localhost kernel: pnp: PnP ACPI init
Jan 10 16:23:51 localhost kernel: pnp 00:03: [dma 2]
Jan 10 16:23:51 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 10 16:23:51 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 10 16:23:51 localhost kernel: NET: Registered PF_INET protocol family
Jan 10 16:23:51 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 10 16:23:51 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 10 16:23:51 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 10 16:23:51 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 10 16:23:51 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 10 16:23:51 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 10 16:23:51 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 10 16:23:51 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 10 16:23:51 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 10 16:23:51 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 10 16:23:51 localhost kernel: NET: Registered PF_XDP protocol family
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 10 16:23:51 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 10 16:23:51 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 10 16:23:51 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 10 16:23:51 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 87131 usecs
Jan 10 16:23:51 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 10 16:23:51 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 10 16:23:51 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 10 16:23:51 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 10 16:23:51 localhost kernel: ACPI: bus type thunderbolt registered
Jan 10 16:23:51 localhost kernel: Initialise system trusted keyrings
Jan 10 16:23:51 localhost kernel: Key type blacklist registered
Jan 10 16:23:51 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 10 16:23:51 localhost kernel: zbud: loaded
Jan 10 16:23:51 localhost kernel: integrity: Platform Keyring initialized
Jan 10 16:23:51 localhost kernel: integrity: Machine keyring initialized
Jan 10 16:23:51 localhost kernel: Freeing initrd memory: 87840K
Jan 10 16:23:51 localhost kernel: NET: Registered PF_ALG protocol family
Jan 10 16:23:51 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 10 16:23:51 localhost kernel: Key type asymmetric registered
Jan 10 16:23:51 localhost kernel: Asymmetric key parser 'x509' registered
Jan 10 16:23:51 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 10 16:23:51 localhost kernel: io scheduler mq-deadline registered
Jan 10 16:23:51 localhost kernel: io scheduler kyber registered
Jan 10 16:23:51 localhost kernel: io scheduler bfq registered
Jan 10 16:23:51 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 10 16:23:51 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 10 16:23:51 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 10 16:23:51 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 10 16:23:51 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 10 16:23:51 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 10 16:23:51 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 10 16:23:51 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 10 16:23:51 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 10 16:23:51 localhost kernel: Non-volatile memory driver v1.3
Jan 10 16:23:51 localhost kernel: rdac: device handler registered
Jan 10 16:23:51 localhost kernel: hp_sw: device handler registered
Jan 10 16:23:51 localhost kernel: emc: device handler registered
Jan 10 16:23:51 localhost kernel: alua: device handler registered
Jan 10 16:23:51 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 10 16:23:51 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 10 16:23:51 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 10 16:23:51 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 10 16:23:51 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 10 16:23:51 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 10 16:23:51 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 10 16:23:51 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-655.el9.x86_64 uhci_hcd
Jan 10 16:23:51 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 10 16:23:51 localhost kernel: hub 1-0:1.0: USB hub found
Jan 10 16:23:51 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 10 16:23:51 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 10 16:23:51 localhost kernel: usbserial: USB Serial support registered for generic
Jan 10 16:23:51 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 10 16:23:51 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 10 16:23:51 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 10 16:23:51 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 10 16:23:51 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 10 16:23:51 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 10 16:23:51 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 10 16:23:51 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 10 16:23:51 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-10T16:23:50 UTC (1768062230)
Jan 10 16:23:51 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 10 16:23:51 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 10 16:23:51 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 10 16:23:51 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 10 16:23:51 localhost kernel: usbcore: registered new interface driver usbhid
Jan 10 16:23:51 localhost kernel: usbhid: USB HID core driver
Jan 10 16:23:51 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 10 16:23:51 localhost kernel: Initializing XFRM netlink socket
Jan 10 16:23:51 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 10 16:23:51 localhost kernel: Segment Routing with IPv6
Jan 10 16:23:51 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 10 16:23:51 localhost kernel: mpls_gso: MPLS GSO support
Jan 10 16:23:51 localhost kernel: IPI shorthand broadcast: enabled
Jan 10 16:23:51 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 10 16:23:51 localhost kernel: AES CTR mode by8 optimization enabled
Jan 10 16:23:51 localhost kernel: sched_clock: Marking stable (1267005970, 143396570)->(1524862889, -114460349)
Jan 10 16:23:51 localhost kernel: registered taskstats version 1
Jan 10 16:23:51 localhost kernel: Loading compiled-in X.509 certificates
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 10 16:23:51 localhost kernel: Demotion targets for Node 0: null
Jan 10 16:23:51 localhost kernel: page_owner is disabled
Jan 10 16:23:51 localhost kernel: Key type .fscrypt registered
Jan 10 16:23:51 localhost kernel: Key type fscrypt-provisioning registered
Jan 10 16:23:51 localhost kernel: Key type big_key registered
Jan 10 16:23:51 localhost kernel: Key type encrypted registered
Jan 10 16:23:51 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 10 16:23:51 localhost kernel: Loading compiled-in module X.509 certificates
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 10 16:23:51 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 10 16:23:51 localhost kernel: ima: No architecture policies found
Jan 10 16:23:51 localhost kernel: evm: Initialising EVM extended attributes:
Jan 10 16:23:51 localhost kernel: evm: security.selinux
Jan 10 16:23:51 localhost kernel: evm: security.SMACK64 (disabled)
Jan 10 16:23:51 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 10 16:23:51 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 10 16:23:51 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 10 16:23:51 localhost kernel: evm: security.apparmor (disabled)
Jan 10 16:23:51 localhost kernel: evm: security.ima
Jan 10 16:23:51 localhost kernel: evm: security.capability
Jan 10 16:23:51 localhost kernel: evm: HMAC attrs: 0x1
Jan 10 16:23:51 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 10 16:23:51 localhost kernel: Running certificate verification RSA selftest
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 10 16:23:51 localhost kernel: Running certificate verification ECDSA selftest
Jan 10 16:23:51 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 10 16:23:51 localhost kernel: clk: Disabling unused clocks
Jan 10 16:23:51 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 10 16:23:51 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 10 16:23:51 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 10 16:23:51 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Jan 10 16:23:51 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 10 16:23:51 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 10 16:23:51 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 10 16:23:51 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 10 16:23:51 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 10 16:23:51 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 10 16:23:51 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 10 16:23:51 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 10 16:23:51 localhost kernel: Run /init as init process
Jan 10 16:23:51 localhost kernel:   with arguments:
Jan 10 16:23:51 localhost kernel:     /init
Jan 10 16:23:51 localhost kernel:   with environment:
Jan 10 16:23:51 localhost kernel:     HOME=/
Jan 10 16:23:51 localhost kernel:     TERM=linux
Jan 10 16:23:51 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64
Jan 10 16:23:51 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 10 16:23:51 localhost systemd[1]: Detected virtualization kvm.
Jan 10 16:23:51 localhost systemd[1]: Detected architecture x86-64.
Jan 10 16:23:51 localhost systemd[1]: Running in initrd.
Jan 10 16:23:51 localhost systemd[1]: No hostname configured, using default hostname.
Jan 10 16:23:51 localhost systemd[1]: Hostname set to <localhost>.
Jan 10 16:23:51 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 10 16:23:51 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 10 16:23:51 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 10 16:23:51 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 10 16:23:51 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 10 16:23:51 localhost systemd[1]: Reached target Local File Systems.
Jan 10 16:23:51 localhost systemd[1]: Reached target Path Units.
Jan 10 16:23:51 localhost systemd[1]: Reached target Slice Units.
Jan 10 16:23:51 localhost systemd[1]: Reached target Swaps.
Jan 10 16:23:51 localhost systemd[1]: Reached target Timer Units.
Jan 10 16:23:51 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 10 16:23:51 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 10 16:23:51 localhost systemd[1]: Listening on Journal Socket.
Jan 10 16:23:51 localhost systemd[1]: Listening on udev Control Socket.
Jan 10 16:23:51 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 10 16:23:51 localhost systemd[1]: Reached target Socket Units.
Jan 10 16:23:51 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 10 16:23:51 localhost systemd[1]: Starting Journal Service...
Jan 10 16:23:51 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 10 16:23:51 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 10 16:23:51 localhost systemd[1]: Starting Create System Users...
Jan 10 16:23:51 localhost systemd[1]: Starting Setup Virtual Console...
Jan 10 16:23:51 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 10 16:23:51 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 10 16:23:51 localhost systemd-journald[309]: Journal started
Jan 10 16:23:51 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/a9d7d54472dd4b089e5e495057bde287) is 8.0M, max 153.6M, 145.6M free.
Jan 10 16:23:51 localhost systemd[1]: Started Journal Service.
Jan 10 16:23:51 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 10 16:23:51 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 10 16:23:51 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 10 16:23:51 localhost systemd[1]: Finished Create System Users.
Jan 10 16:23:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 10 16:23:51 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 10 16:23:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 10 16:23:51 localhost systemd[1]: Finished Setup Virtual Console.
Jan 10 16:23:51 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 10 16:23:51 localhost systemd[1]: Starting dracut cmdline hook...
Jan 10 16:23:51 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 10 16:23:51 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 16:23:51 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 10 16:23:51 localhost systemd[1]: Finished dracut cmdline hook.
Jan 10 16:23:51 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 10 16:23:51 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 10 16:23:51 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 10 16:23:51 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 10 16:23:51 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 10 16:23:51 localhost kernel: RPC: Registered udp transport module.
Jan 10 16:23:51 localhost kernel: RPC: Registered tcp transport module.
Jan 10 16:23:51 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 10 16:23:51 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 10 16:23:51 localhost rpc.statd[448]: Version 2.5.4 starting
Jan 10 16:23:51 localhost rpc.statd[448]: Initializing NSM state
Jan 10 16:23:51 localhost rpc.idmapd[453]: Setting log level to 0
Jan 10 16:23:51 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 10 16:23:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 10 16:23:51 localhost systemd-udevd[466]: Using default interface naming scheme 'rhel-9.0'.
Jan 10 16:23:51 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 10 16:23:51 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 10 16:23:51 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 10 16:23:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 10 16:23:52 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 10 16:23:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 10 16:23:52 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 10 16:23:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 16:23:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 10 16:23:52 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 10 16:23:52 localhost systemd[1]: Reached target Network.
Jan 10 16:23:52 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 10 16:23:52 localhost systemd[1]: Starting dracut initqueue hook...
Jan 10 16:23:52 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 10 16:23:52 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 10 16:23:52 localhost kernel: libata version 3.00 loaded.
Jan 10 16:23:52 localhost kernel:  vda: vda1
Jan 10 16:23:52 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 10 16:23:52 localhost kernel: scsi host0: ata_piix
Jan 10 16:23:52 localhost kernel: scsi host1: ata_piix
Jan 10 16:23:52 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 10 16:23:52 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 10 16:23:52 localhost systemd[1]: Found device /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 10 16:23:52 localhost systemd[1]: Reached target Initrd Root Device.
Jan 10 16:23:52 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 10 16:23:52 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 10 16:23:52 localhost systemd[1]: Reached target System Initialization.
Jan 10 16:23:52 localhost systemd[1]: Reached target Basic System.
Jan 10 16:23:52 localhost kernel: ata1: found unknown device (class 0)
Jan 10 16:23:52 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 10 16:23:52 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 10 16:23:52 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:23:52 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 10 16:23:52 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 10 16:23:52 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 10 16:23:52 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 10 16:23:52 localhost systemd[1]: Finished dracut initqueue hook.
Jan 10 16:23:52 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 10 16:23:52 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 10 16:23:52 localhost systemd[1]: Reached target Remote File Systems.
Jan 10 16:23:52 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 10 16:23:52 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 10 16:23:52 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc...
Jan 10 16:23:52 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Jan 10 16:23:52 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 10 16:23:52 localhost systemd[1]: Mounting /sysroot...
Jan 10 16:23:53 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 10 16:23:53 localhost kernel: XFS (vda1): Mounting V5 Filesystem f2a0a5c1-133f-4977-b837-e40b31cbd9cc
Jan 10 16:23:53 localhost kernel: XFS (vda1): Ending clean mount
Jan 10 16:23:53 localhost systemd[1]: Mounted /sysroot.
Jan 10 16:23:53 localhost systemd[1]: Reached target Initrd Root File System.
Jan 10 16:23:53 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 10 16:23:53 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 10 16:23:53 localhost systemd[1]: Reached target Initrd File Systems.
Jan 10 16:23:53 localhost systemd[1]: Reached target Initrd Default Target.
Jan 10 16:23:53 localhost systemd[1]: Starting dracut mount hook...
Jan 10 16:23:53 localhost systemd[1]: Finished dracut mount hook.
Jan 10 16:23:53 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 10 16:23:53 localhost rpc.idmapd[453]: exiting on signal 15
Jan 10 16:23:53 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 10 16:23:53 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 10 16:23:53 localhost systemd[1]: Stopped target Network.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Timer Units.
Jan 10 16:23:53 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 10 16:23:53 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Basic System.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Path Units.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Remote File Systems.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Slice Units.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Socket Units.
Jan 10 16:23:53 localhost systemd[1]: Stopped target System Initialization.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Local File Systems.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Swaps.
Jan 10 16:23:53 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut mount hook.
Jan 10 16:23:53 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 10 16:23:53 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 10 16:23:53 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 10 16:23:53 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 10 16:23:53 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 10 16:23:53 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 10 16:23:53 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 10 16:23:53 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 10 16:23:53 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 10 16:23:53 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 10 16:23:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 10 16:23:53 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Closed udev Control Socket.
Jan 10 16:23:53 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Closed udev Kernel Socket.
Jan 10 16:23:53 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 10 16:23:53 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 10 16:23:53 localhost systemd[1]: Starting Cleanup udev Database...
Jan 10 16:23:53 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 10 16:23:53 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 10 16:23:53 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Stopped Create System Users.
Jan 10 16:23:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 10 16:23:53 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 10 16:23:53 localhost systemd[1]: Finished Cleanup udev Database.
Jan 10 16:23:53 localhost systemd[1]: Reached target Switch Root.
Jan 10 16:23:53 localhost systemd[1]: Starting Switch Root...
Jan 10 16:23:53 localhost systemd[1]: Switching root.
Jan 10 16:23:53 localhost systemd-journald[309]: Journal stopped
Jan 10 16:23:54 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jan 10 16:23:54 localhost kernel: audit: type=1404 audit(1768062233.571:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability open_perms=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:23:54 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:23:54 localhost kernel: audit: type=1403 audit(1768062233.700:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 10 16:23:54 localhost systemd[1]: Successfully loaded SELinux policy in 132.940ms.
Jan 10 16:23:54 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.798ms.
Jan 10 16:23:54 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 10 16:23:54 localhost systemd[1]: Detected virtualization kvm.
Jan 10 16:23:54 localhost systemd[1]: Detected architecture x86-64.
Jan 10 16:23:54 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:23:54 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Stopped Switch Root.
Jan 10 16:23:54 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 10 16:23:54 localhost systemd[1]: Created slice Slice /system/getty.
Jan 10 16:23:54 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 10 16:23:54 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 10 16:23:54 localhost systemd[1]: Created slice User and Session Slice.
Jan 10 16:23:54 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 10 16:23:54 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 10 16:23:54 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 10 16:23:54 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 10 16:23:54 localhost systemd[1]: Stopped target Switch Root.
Jan 10 16:23:54 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 10 16:23:54 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 10 16:23:54 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 10 16:23:54 localhost systemd[1]: Reached target Path Units.
Jan 10 16:23:54 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 10 16:23:54 localhost systemd[1]: Reached target Slice Units.
Jan 10 16:23:54 localhost systemd[1]: Reached target Swaps.
Jan 10 16:23:54 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 10 16:23:54 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 10 16:23:54 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 10 16:23:54 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 10 16:23:54 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 10 16:23:54 localhost systemd[1]: Listening on udev Control Socket.
Jan 10 16:23:54 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 10 16:23:54 localhost systemd[1]: Mounting Huge Pages File System...
Jan 10 16:23:54 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 10 16:23:54 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 10 16:23:54 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 10 16:23:54 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 10 16:23:54 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 10 16:23:54 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 10 16:23:54 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 10 16:23:54 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 10 16:23:54 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 10 16:23:54 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 10 16:23:54 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 10 16:23:54 localhost systemd[1]: Stopped Journal Service.
Jan 10 16:23:54 localhost systemd[1]: Starting Journal Service...
Jan 10 16:23:54 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 10 16:23:54 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 10 16:23:54 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 16:23:54 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 10 16:23:54 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 10 16:23:54 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 10 16:23:54 localhost systemd-journald[679]: Journal started
Jan 10 16:23:54 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 10 16:23:54 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 10 16:23:54 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 10 16:23:54 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 10 16:23:54 localhost kernel: fuse: init (API version 7.37)
Jan 10 16:23:54 localhost systemd[1]: Started Journal Service.
Jan 10 16:23:54 localhost systemd[1]: Mounted Huge Pages File System.
Jan 10 16:23:54 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 10 16:23:54 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 10 16:23:54 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 10 16:23:54 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 10 16:23:54 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 10 16:23:54 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 10 16:23:54 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 10 16:23:54 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 10 16:23:54 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 10 16:23:54 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 10 16:23:54 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 10 16:23:54 localhost kernel: ACPI: bus type drm_connector registered
Jan 10 16:23:54 localhost systemd[1]: Mounting FUSE Control File System...
Jan 10 16:23:54 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 10 16:23:54 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 10 16:23:54 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 10 16:23:54 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 10 16:23:54 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 10 16:23:54 localhost systemd[1]: Starting Create System Users...
Jan 10 16:23:54 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 10 16:23:54 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 10 16:23:54 localhost systemd[1]: Mounted FUSE Control File System.
Jan 10 16:23:54 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 10 16:23:54 localhost systemd-journald[679]: Received client request to flush runtime journal.
Jan 10 16:23:54 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 10 16:23:54 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 10 16:23:54 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 10 16:23:54 localhost systemd[1]: Finished Create System Users.
Jan 10 16:23:54 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 10 16:23:54 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 10 16:23:54 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 10 16:23:54 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 10 16:23:54 localhost systemd[1]: Reached target Local File Systems.
Jan 10 16:23:54 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 10 16:23:54 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 10 16:23:54 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 10 16:23:54 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 10 16:23:54 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 10 16:23:54 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 10 16:23:54 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 10 16:23:54 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 10 16:23:54 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 10 16:23:54 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 10 16:23:54 localhost systemd[1]: Starting Security Auditing Service...
Jan 10 16:23:54 localhost systemd[1]: Starting RPC Bind...
Jan 10 16:23:54 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 10 16:23:54 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 10 16:23:54 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 10 16:23:54 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 10 16:23:54 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 10 16:23:54 localhost augenrules[708]: /sbin/augenrules: No change
Jan 10 16:23:54 localhost systemd[1]: Started RPC Bind.
Jan 10 16:23:54 localhost augenrules[723]: No rules
Jan 10 16:23:54 localhost augenrules[723]: enabled 1
Jan 10 16:23:54 localhost augenrules[723]: failure 1
Jan 10 16:23:54 localhost augenrules[723]: pid 702
Jan 10 16:23:54 localhost augenrules[723]: rate_limit 0
Jan 10 16:23:54 localhost augenrules[723]: backlog_limit 8192
Jan 10 16:23:54 localhost augenrules[723]: lost 0
Jan 10 16:23:54 localhost augenrules[723]: backlog 4
Jan 10 16:23:54 localhost augenrules[723]: backlog_wait_time 60000
Jan 10 16:23:54 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 10 16:23:54 localhost systemd[1]: Started Security Auditing Service.
Jan 10 16:23:54 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 10 16:23:54 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 10 16:23:54 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 10 16:23:54 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 10 16:23:54 localhost systemd[1]: Starting Update is Completed...
Jan 10 16:23:54 localhost systemd[1]: Finished Update is Completed.
Jan 10 16:23:55 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 10 16:23:55 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 10 16:23:55 localhost systemd[1]: Reached target System Initialization.
Jan 10 16:23:55 localhost systemd[1]: Started dnf makecache --timer.
Jan 10 16:23:55 localhost systemd[1]: Started Daily rotation of log files.
Jan 10 16:23:55 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 10 16:23:55 localhost systemd[1]: Reached target Timer Units.
Jan 10 16:23:55 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 10 16:23:55 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 10 16:23:55 localhost systemd[1]: Reached target Socket Units.
Jan 10 16:23:55 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 10 16:23:55 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 16:23:55 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 10 16:23:55 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 10 16:23:55 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 16:23:55 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 10 16:23:55 localhost systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:23:55 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 10 16:23:55 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 10 16:23:55 localhost systemd[1]: Reached target Basic System.
Jan 10 16:23:55 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 10 16:23:55 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 10 16:23:55 localhost dbus-broker-lau[744]: Ready
Jan 10 16:23:55 localhost systemd[1]: Starting NTP client/server...
Jan 10 16:23:55 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 10 16:23:55 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 10 16:23:55 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 10 16:23:55 localhost chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 10 16:23:55 localhost chronyd[785]: Loaded 0 symmetric keys
Jan 10 16:23:55 localhost chronyd[785]: Using right/UTC timezone to obtain leap second data
Jan 10 16:23:55 localhost chronyd[785]: Loaded seccomp filter (level 2)
Jan 10 16:23:55 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 10 16:23:55 localhost systemd[1]: Started irqbalance daemon.
Jan 10 16:23:55 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 10 16:23:55 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 16:23:55 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 16:23:55 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 16:23:55 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 10 16:23:55 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 10 16:23:55 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 10 16:23:55 localhost systemd[1]: Starting User Login Management...
Jan 10 16:23:55 localhost systemd[1]: Started NTP client/server.
Jan 10 16:23:55 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 10 16:23:55 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 10 16:23:55 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 10 16:23:55 localhost kernel: kvm_amd: TSC scaling supported
Jan 10 16:23:55 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 10 16:23:55 localhost kernel: kvm_amd: Nested Paging enabled
Jan 10 16:23:55 localhost kernel: kvm_amd: LBR virtualization supported
Jan 10 16:23:55 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 10 16:23:55 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 10 16:23:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 10 16:23:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 10 16:23:55 localhost kernel: Console: switching to colour dummy device 80x25
Jan 10 16:23:55 localhost systemd-logind[798]: New seat seat0.
Jan 10 16:23:55 localhost systemd[1]: Started User Login Management.
Jan 10 16:23:55 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 10 16:23:55 localhost kernel: [drm] features: -context_init
Jan 10 16:23:55 localhost kernel: [drm] number of scanouts: 1
Jan 10 16:23:55 localhost kernel: [drm] number of cap sets: 0
Jan 10 16:23:55 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 10 16:23:55 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 10 16:23:55 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 10 16:23:55 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 10 16:23:55 localhost iptables.init[789]: iptables: Applying firewall rules: [  OK  ]
Jan 10 16:23:55 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 10 16:23:55 localhost cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 10 Jan 2026 16:23:55 +0000. Up 6.36 seconds.
Jan 10 16:23:55 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 10 16:23:55 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 10 16:23:55 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpnpub8fvx.mount: Deactivated successfully.
Jan 10 16:23:55 localhost systemd[1]: Starting Hostname Service...
Jan 10 16:23:56 localhost systemd[1]: Started Hostname Service.
Jan 10 16:23:56 np0005580781.novalocal systemd-hostnamed[855]: Hostname set to <np0005580781.novalocal> (static)
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Reached target Preparation for Network.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Starting Network Manager...
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2253] NetworkManager (version 1.54.2-1.el9) is starting... (boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2260] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2334] manager[0x56209a672000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2388] hostname: hostname: using hostnamed
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2388] hostname: static hostname changed from (none) to "np0005580781.novalocal"
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2393] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2514] manager[0x56209a672000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2515] manager[0x56209a672000]: rfkill: WWAN hardware radio set enabled
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2553] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2554] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2555] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2555] manager: Networking is enabled by state file
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2556] settings: Loaded settings plugin: keyfile (internal)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2565] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2582] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2595] dhcp: init: Using DHCP client 'internal'
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2597] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2609] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2615] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2625] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2633] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2636] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2660] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2665] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2668] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2670] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2673] device (eth0): carrier: link connected
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2677] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2684] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2691] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2695] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2695] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2698] manager: NetworkManager state is now CONNECTING
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2699] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2710] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2713] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2749] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2755] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.2773] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Started Network Manager.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Reached target Network.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Reached target NFS client services.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3197] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3200] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3204] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Reached target Remote File Systems.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3218] device (lo): Activation: successful, device activated.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3227] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3233] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3241] device (eth0): Activation: successful, device activated.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3258] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 16:23:56 np0005580781.novalocal NetworkManager[859]: <info>  [1768062236.3271] manager: startup complete
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 10 16:23:56 np0005580781.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 10 Jan 2026 16:23:56 +0000. Up 7.36 seconds.
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.74        | 255.255.255.0 | global | fa:16:3e:49:0e:aa |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe49:eaa/64 |       .       |  link  | fa:16:3e:49:0e:aa |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 10 16:23:56 np0005580781.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 10 16:23:57 np0005580781.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Generating public/private rsa key pair.
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key fingerprint is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: SHA256:NwbL++5yrOKwEryJ6OBZfWK1r7uXkUNzc9raTCTtoYo root@np0005580781.novalocal
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key's randomart image is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +---[RSA 3072]----+
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |                 |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |                 |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |        .   .    |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |       . = + =   |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: | .     .S B X .  |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |  o . . .B + +   |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |o. =.+ oo.= =    |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |= * .o+Eo=o. o   |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |.+ ....=*B+      |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key fingerprint is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: SHA256:Q5P44HImb40twqNrJHfa+flB24JjYHv9nVHYCLjTez0 root@np0005580781.novalocal
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key's randomart image is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +---[ECDSA 256]---+
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |                 |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |       . o       |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |      o = .      |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |     . + + . +   |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |    = + S . o o  |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |. oo.O B = . o   |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: | + +=.X B o o E  |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |  o.o* + + o o . |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: | .o. .o.. . o    |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key fingerprint is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: SHA256:NBCxV9OdqOJixAGHwcnNHRsQ/U445B6s8QNi7vZfqXU root@np0005580781.novalocal
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: The key's randomart image is:
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +--[ED25519 256]--+
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |   ooBBBo.o. o . |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |    =.+o++ .o o  |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |     ..==o .     |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |    o =oB.+      |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |   o o BS*       |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |    . + = ..     |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |   . . . .+ E    |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |    o    + .     |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: |   . ...o        |
Jan 10 16:23:58 np0005580781.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Reached target Network is Online.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting System Logging Service...
Jan 10 16:23:58 np0005580781.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Permit User Sessions...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Finished Permit User Sessions.
Jan 10 16:23:58 np0005580781.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 10 16:23:58 np0005580781.novalocal sshd[1007]: Server listening on :: port 22.
Jan 10 16:23:58 np0005580781.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 10 16:23:58 np0005580781.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started Command Scheduler.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started Getty on tty1.
Jan 10 16:23:58 np0005580781.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 10 16:23:58 np0005580781.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 10 16:23:58 np0005580781.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 56% if used.)
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Reached target Login Prompts.
Jan 10 16:23:58 np0005580781.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Started System Logging Service.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Reached target Multi-User System.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 10 16:23:58 np0005580781.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 16:23:58 np0005580781.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 10 16:23:58 np0005580781.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-655.el9.x86_64kdump.img
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1103]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 10 Jan 2026 16:23:58 +0000. Up 9.16 seconds.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 10 16:23:58 np0005580781.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1266]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 10 Jan 2026 16:23:58 +0000. Up 9.58 seconds.
Jan 10 16:23:58 np0005580781.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1287]: #############################################################
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1288]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1290]: 256 SHA256:Q5P44HImb40twqNrJHfa+flB24JjYHv9nVHYCLjTez0 root@np0005580781.novalocal (ECDSA)
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1292]: 256 SHA256:NBCxV9OdqOJixAGHwcnNHRsQ/U445B6s8QNi7vZfqXU root@np0005580781.novalocal (ED25519)
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1294]: 3072 SHA256:NwbL++5yrOKwEryJ6OBZfWK1r7uXkUNzc9raTCTtoYo root@np0005580781.novalocal (RSA)
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1295]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 10 16:23:58 np0005580781.novalocal cloud-init[1296]: #############################################################
Jan 10 16:23:59 np0005580781.novalocal cloud-init[1266]: Cloud-init v. 24.4-8.el9 finished at Sat, 10 Jan 2026 16:23:59 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.75 seconds
Jan 10 16:23:59 np0005580781.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 10 16:23:59 np0005580781.novalocal systemd[1]: Reached target Cloud-init target.
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-655.el9.x86_64kdump.img 5.14.0-655.el9.x86_64
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1384]: Unable to negotiate with 38.102.83.114 port 47506: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1394]: Unable to negotiate with 38.102.83.114 port 47520: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1399]: Unable to negotiate with 38.102.83.114 port 47524: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1377]: Connection closed by 38.102.83.114 port 47490 [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1404]: Connection reset by 38.102.83.114 port 47536 [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1387]: Connection closed by 38.102.83.114 port 47508 [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1414]: Unable to negotiate with 38.102.83.114 port 47550: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1419]: Unable to negotiate with 38.102.83.114 port 47556: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 10 16:23:59 np0005580781.novalocal sshd-session[1406]: Connection closed by 38.102.83.114 port 47540 [preauth]
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 10 16:23:59 np0005580781.novalocal dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: memstrack is not available
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 10 16:24:00 np0005580781.novalocal dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: memstrack is not available
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: *** Including module: systemd ***
Jan 10 16:24:01 np0005580781.novalocal chronyd[785]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Jan 10 16:24:01 np0005580781.novalocal chronyd[785]: System clock TAI offset set to 37 seconds
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: *** Including module: fips ***
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: *** Including module: systemd-initrd ***
Jan 10 16:24:01 np0005580781.novalocal dracut[1272]: *** Including module: i18n ***
Jan 10 16:24:02 np0005580781.novalocal dracut[1272]: *** Including module: drm ***
Jan 10 16:24:02 np0005580781.novalocal dracut[1272]: *** Including module: prefixdevname ***
Jan 10 16:24:02 np0005580781.novalocal dracut[1272]: *** Including module: kernel-modules ***
Jan 10 16:24:02 np0005580781.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 10 16:24:03 np0005580781.novalocal chronyd[785]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: kernel-modules-extra ***
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: qemu ***
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: fstab-sys ***
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: rootfs-block ***
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: terminfo ***
Jan 10 16:24:03 np0005580781.novalocal dracut[1272]: *** Including module: udev-rules ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: Skipping udev rule: 91-permissions.rules
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: virtiofs ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: dracut-systemd ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: usrmount ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: base ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: fs-lib ***
Jan 10 16:24:04 np0005580781.novalocal dracut[1272]: *** Including module: kdumpbase ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:   microcode_ctl module: mangling fw_dir
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 25 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 31 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 28 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 32 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 30 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 10 16:24:05 np0005580781.novalocal irqbalance[794]: IRQ 29 affinity is now unmanaged
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Including module: openssl ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Including module: shutdown ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Including module: squash ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Including modules done ***
Jan 10 16:24:05 np0005580781.novalocal dracut[1272]: *** Installing kernel module dependencies ***
Jan 10 16:24:06 np0005580781.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:24:06 np0005580781.novalocal dracut[1272]: *** Installing kernel module dependencies done ***
Jan 10 16:24:06 np0005580781.novalocal dracut[1272]: *** Resolving executable dependencies ***
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: *** Resolving executable dependencies done ***
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: *** Generating early-microcode cpio image ***
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: *** Store current command line parameters ***
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: Stored kernel commandline:
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: No dracut internal kernel commandline stored in the initramfs
Jan 10 16:24:08 np0005580781.novalocal dracut[1272]: *** Install squash loader ***
Jan 10 16:24:09 np0005580781.novalocal dracut[1272]: *** Squashing the files inside the initramfs ***
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: *** Squashing the files inside the initramfs done ***
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: *** Creating image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' ***
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: *** Hardlinking files ***
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Mode:           real
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Files:          50
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Linked:         0 files
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Compared:       0 xattrs
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Compared:       0 files
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Saved:          0 B
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: Duration:       0.000442 seconds
Jan 10 16:24:10 np0005580781.novalocal dracut[1272]: *** Hardlinking files done ***
Jan 10 16:24:11 np0005580781.novalocal dracut[1272]: *** Creating initramfs image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' done ***
Jan 10 16:24:11 np0005580781.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 10 16:24:11 np0005580781.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 10 16:24:11 np0005580781.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 10 16:24:11 np0005580781.novalocal systemd[1]: Startup finished in 1.616s (kernel) + 2.673s (initrd) + 17.992s (userspace) = 22.282s.
Jan 10 16:24:15 np0005580781.novalocal sshd-session[4296]: Accepted publickey for zuul from 38.102.83.114 port 44516 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 10 16:24:15 np0005580781.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 10 16:24:15 np0005580781.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 10 16:24:15 np0005580781.novalocal systemd-logind[798]: New session 1 of user zuul.
Jan 10 16:24:15 np0005580781.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 10 16:24:15 np0005580781.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 10 16:24:15 np0005580781.novalocal systemd[4300]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Queued start job for default target Main User Target.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Created slice User Application Slice.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Reached target Paths.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Reached target Timers.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Starting D-Bus User Message Bus Socket...
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Starting Create User's Volatile Files and Directories...
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Listening on D-Bus User Message Bus Socket.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Reached target Sockets.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Finished Create User's Volatile Files and Directories.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Reached target Basic System.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Reached target Main User Target.
Jan 10 16:24:16 np0005580781.novalocal systemd[4300]: Startup finished in 180ms.
Jan 10 16:24:16 np0005580781.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 10 16:24:16 np0005580781.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 10 16:24:16 np0005580781.novalocal sshd-session[4296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:24:16 np0005580781.novalocal python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:24:19 np0005580781.novalocal python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:24:25 np0005580781.novalocal python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:24:26 np0005580781.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 16:24:26 np0005580781.novalocal python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 10 16:24:28 np0005580781.novalocal python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeD5tUo51Lv8h9yEywo+EOfcOe8O9qcfGbBz06qpMofMkfxvR9FJX7HaldGOhwPYwnN8IcRyTV0h847QPaPD8sCQvCeERQFB0o7dWNv+B+pWlIgEkBfmCi8JouOBTrd0NGVq3z7xoWFJCIhDxepjZel40n5uFRbXifZMxjGZrBxLACjQHb8AMbrKf8TYZcYndKFcrlL13N1yC56oCEom41G55ck/7+EGgn0l5uwcGMq1fd8RaeO0ZQltzgUcuE/zaPMv0q2Ei6Ckc2bxrS6VXqXtlQFBfapEZxx0e1ihCKZbdcILoqJKFsm5ufcIXfG6MHTWxmvAx/4z5vq71RgaMB05qVzt519yWHI5FrhDr7CeTtAnPuaLUdyzMYuCcmle5UE3HfdflGVSXEuMjOCQUqF76hnlsJcZW54AtE2ia6dDZ42zqD/T5034uJu3DuFHblXGZt3nABoRwiikk+BWjMR2kKY7OR5kFqysxprgOGlHXMBEIBdkN6WZmUXHMLHjM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:28 np0005580781.novalocal python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:29 np0005580781.novalocal python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:29 np0005580781.novalocal python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062268.8812065-207-207263926591037/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=06074398b50949c395f57300e3d7e828_id_rsa follow=False checksum=133840384c351d4ac55c4317617f39d325dcfaaf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:30 np0005580781.novalocal python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:30 np0005580781.novalocal python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062269.8792195-240-170148823908400/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=06074398b50949c395f57300e3d7e828_id_rsa.pub follow=False checksum=218145de1e2a4d006d31f6e8dfc84696a708209c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:32 np0005580781.novalocal python3[4972]: ansible-ping Invoked with data=pong
Jan 10 16:24:33 np0005580781.novalocal python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:24:35 np0005580781.novalocal python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 10 16:24:36 np0005580781.novalocal python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:36 np0005580781.novalocal python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:36 np0005580781.novalocal python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:37 np0005580781.novalocal python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:37 np0005580781.novalocal python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:37 np0005580781.novalocal python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:39 np0005580781.novalocal sudo[5230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvnrntdckgaoluwcqlbovlnevsnstagw ; /usr/bin/python3'
Jan 10 16:24:39 np0005580781.novalocal sudo[5230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:39 np0005580781.novalocal python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:39 np0005580781.novalocal sudo[5230]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:39 np0005580781.novalocal sudo[5308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvmswhnkapqrudghkzucudozqgwczxts ; /usr/bin/python3'
Jan 10 16:24:39 np0005580781.novalocal sudo[5308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:40 np0005580781.novalocal python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:40 np0005580781.novalocal sudo[5308]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:40 np0005580781.novalocal sudo[5381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgvsiycixfjrgbuqxouhfvraenktuoxz ; /usr/bin/python3'
Jan 10 16:24:40 np0005580781.novalocal sudo[5381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:40 np0005580781.novalocal python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062279.694272-21-144281286314797/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:40 np0005580781.novalocal sudo[5381]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:41 np0005580781.novalocal python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:41 np0005580781.novalocal python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:41 np0005580781.novalocal python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:42 np0005580781.novalocal python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:42 np0005580781.novalocal python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:42 np0005580781.novalocal python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:43 np0005580781.novalocal python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:43 np0005580781.novalocal python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:43 np0005580781.novalocal python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:44 np0005580781.novalocal python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:44 np0005580781.novalocal python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:44 np0005580781.novalocal python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:44 np0005580781.novalocal python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:45 np0005580781.novalocal python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:45 np0005580781.novalocal python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:45 np0005580781.novalocal python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:46 np0005580781.novalocal python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:46 np0005580781.novalocal python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:46 np0005580781.novalocal python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:47 np0005580781.novalocal python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:47 np0005580781.novalocal python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:47 np0005580781.novalocal python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:47 np0005580781.novalocal python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:48 np0005580781.novalocal python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:48 np0005580781.novalocal python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:48 np0005580781.novalocal python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:24:50 np0005580781.novalocal sudo[6055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzsbrtjlybjkfmhmgvknurthxgzxwyml ; /usr/bin/python3'
Jan 10 16:24:50 np0005580781.novalocal sudo[6055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:51 np0005580781.novalocal python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 16:24:51 np0005580781.novalocal systemd[1]: Starting Time & Date Service...
Jan 10 16:24:51 np0005580781.novalocal systemd[1]: Started Time & Date Service.
Jan 10 16:24:51 np0005580781.novalocal systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Jan 10 16:24:51 np0005580781.novalocal sudo[6055]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:51 np0005580781.novalocal sudo[6086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxaryfeqkduviwluteqgzoyfvmsxdfkn ; /usr/bin/python3'
Jan 10 16:24:51 np0005580781.novalocal sudo[6086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:51 np0005580781.novalocal python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:51 np0005580781.novalocal sudo[6086]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:52 np0005580781.novalocal python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:52 np0005580781.novalocal python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768062291.7915237-153-210111170487502/source _original_basename=tmp57z7jz8w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:53 np0005580781.novalocal python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:53 np0005580781.novalocal python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768062292.711432-183-93559023362781/source _original_basename=tmpo3thpkxa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:53 np0005580781.novalocal sudo[6506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzmwggsbxepnlerqvimejmycxisaluo ; /usr/bin/python3'
Jan 10 16:24:53 np0005580781.novalocal sudo[6506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:54 np0005580781.novalocal python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:54 np0005580781.novalocal sudo[6506]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:54 np0005580781.novalocal sudo[6579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzriwpopttzznqzntathasiucnhhysd ; /usr/bin/python3'
Jan 10 16:24:54 np0005580781.novalocal sudo[6579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:54 np0005580781.novalocal python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768062293.7859862-231-262066641921198/source _original_basename=tmpe4j7s4jn follow=False checksum=6c462e10cf6b935fb22f4386c31d576dcf4d4133 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:54 np0005580781.novalocal sudo[6579]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:55 np0005580781.novalocal python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:24:55 np0005580781.novalocal python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:24:55 np0005580781.novalocal sudo[6733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdcegabmxuyhtrynhsjddmojitirxce ; /usr/bin/python3'
Jan 10 16:24:55 np0005580781.novalocal sudo[6733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:55 np0005580781.novalocal python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:24:55 np0005580781.novalocal sudo[6733]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:56 np0005580781.novalocal sudo[6806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjexuwljagyczsgipdsjekxpkwxkbot ; /usr/bin/python3'
Jan 10 16:24:56 np0005580781.novalocal sudo[6806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:56 np0005580781.novalocal python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062295.5583942-273-160780419477474/source _original_basename=tmpeixxwqmy follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:24:56 np0005580781.novalocal sudo[6806]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:56 np0005580781.novalocal sudo[6857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znpokpbdmxkjaexjpqbtpjgyqooydoao ; /usr/bin/python3'
Jan 10 16:24:56 np0005580781.novalocal sudo[6857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:24:56 np0005580781.novalocal python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-34c0-10b8-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:24:56 np0005580781.novalocal sudo[6857]: pam_unix(sudo:session): session closed for user root
Jan 10 16:24:57 np0005580781.novalocal python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-34c0-10b8-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 10 16:24:58 np0005580781.novalocal python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:25:14 np0005580781.novalocal sudo[6939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcjnxumhppqxomrhzzjxzvrhqhgnueye ; /usr/bin/python3'
Jan 10 16:25:14 np0005580781.novalocal sudo[6939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:25:14 np0005580781.novalocal python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:25:14 np0005580781.novalocal sudo[6939]: pam_unix(sudo:session): session closed for user root
Jan 10 16:25:21 np0005580781.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 10 16:25:50 np0005580781.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 10 16:25:50 np0005580781.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2539] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 16:25:50 np0005580781.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2714] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2743] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2747] device (eth1): carrier: link connected
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2750] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2757] policy: auto-activating connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2761] device (eth1): Activation: starting connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2762] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2765] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2770] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:25:50 np0005580781.novalocal NetworkManager[859]: <info>  [1768062350.2775] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:25:51 np0005580781.novalocal python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-2dfb-ba4b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:26:01 np0005580781.novalocal sudo[7049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqxduozlrnxtuadbepwcypewctcxhvpz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 10 16:26:01 np0005580781.novalocal sudo[7049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:26:01 np0005580781.novalocal python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:26:01 np0005580781.novalocal sudo[7049]: pam_unix(sudo:session): session closed for user root
Jan 10 16:26:01 np0005580781.novalocal sudo[7122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osnlcsbdydkcxaixlwjfnprvhtzjqwqg ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 10 16:26:01 np0005580781.novalocal sudo[7122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:26:01 np0005580781.novalocal python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062361.144869-102-256905809893115/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=72c6eb85ec6de524f8f776b873377fe42c6f485e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:26:01 np0005580781.novalocal sudo[7122]: pam_unix(sudo:session): session closed for user root
Jan 10 16:26:02 np0005580781.novalocal sudo[7172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjpgsqmspjnnpajsbsapsphgaiqjoea ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 10 16:26:02 np0005580781.novalocal sudo[7172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:26:02 np0005580781.novalocal python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Stopping Network Manager...
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.7866] caught SIGTERM, shutting down normally.
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.7876] dhcp4 (eth0): canceled DHCP transaction
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.7877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.7877] dhcp4 (eth0): state changed no lease
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.7880] manager: NetworkManager state is now CONNECTING
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.8037] dhcp4 (eth1): canceled DHCP transaction
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.8038] dhcp4 (eth1): state changed no lease
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[859]: <info>  [1768062362.8107] exiting (success)
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Stopped Network Manager.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: NetworkManager.service: Consumed 1.133s CPU time, 10.0M memory peak.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Starting Network Manager...
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062362.8805] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062362.8808] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 16:26:02 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062362.8899] manager[0x55ea1b6e0000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Starting Hostname Service...
Jan 10 16:26:02 np0005580781.novalocal systemd[1]: Started Hostname Service.
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0020] hostname: hostname: using hostnamed
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0022] hostname: static hostname changed from (none) to "np0005580781.novalocal"
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0029] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0034] manager[0x55ea1b6e0000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0034] manager[0x55ea1b6e0000]: rfkill: WWAN hardware radio set enabled
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0075] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0076] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0078] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0078] manager: Networking is enabled by state file
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0082] settings: Loaded settings plugin: keyfile (internal)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0088] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0125] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0138] dhcp: init: Using DHCP client 'internal'
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0142] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0149] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0156] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0167] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0176] device (eth0): carrier: link connected
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0183] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0190] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0191] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0200] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0210] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0219] device (eth1): carrier: link connected
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0225] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0233] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874) (indicated)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0234] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0241] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0251] device (eth1): Activation: starting connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 16:26:03 np0005580781.novalocal systemd[1]: Started Network Manager.
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0261] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0267] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0271] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0274] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0277] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0282] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0285] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0290] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0294] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0304] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0309] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0320] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0325] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0344] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0351] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0360] device (lo): Activation: successful, device activated.
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0370] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0380] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 16:26:03 np0005580781.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0459] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0500] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0502] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0507] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0510] device (eth0): Activation: successful, device activated.
Jan 10 16:26:03 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062363.0519] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 16:26:03 np0005580781.novalocal sudo[7172]: pam_unix(sudo:session): session closed for user root
Jan 10 16:26:03 np0005580781.novalocal python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-2dfb-ba4b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:26:13 np0005580781.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:26:33 np0005580781.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.2801] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 16:26:48 np0005580781.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:26:48 np0005580781.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3280] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3284] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3295] device (eth1): Activation: successful, device activated.
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3306] manager: startup complete
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3309] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <warn>  [1768062408.3319] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3331] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3470] dhcp4 (eth1): canceled DHCP transaction
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3470] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3471] dhcp4 (eth1): state changed no lease
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3493] policy: auto-activating connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3500] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3501] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3504] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3516] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3530] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3584] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3588] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:26:48 np0005580781.novalocal NetworkManager[7178]: <info>  [1768062408.3600] device (eth1): Activation: successful, device activated.
Jan 10 16:26:58 np0005580781.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:27:03 np0005580781.novalocal sshd-session[4309]: Received disconnect from 38.102.83.114 port 44516:11: disconnected by user
Jan 10 16:27:03 np0005580781.novalocal sshd-session[4309]: Disconnected from user zuul 38.102.83.114 port 44516
Jan 10 16:27:03 np0005580781.novalocal sshd-session[4296]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:27:03 np0005580781.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Jan 10 16:27:03 np0005580781.novalocal sshd-session[7286]: Accepted publickey for zuul from 38.102.83.114 port 40876 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:27:03 np0005580781.novalocal systemd-logind[798]: New session 3 of user zuul.
Jan 10 16:27:03 np0005580781.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 10 16:27:03 np0005580781.novalocal sshd-session[7286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:27:03 np0005580781.novalocal sudo[7366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fexrrzngkznichonjcdvhuxyjqcuorrl ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 10 16:27:03 np0005580781.novalocal sudo[7366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:27:03 np0005580781.novalocal python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:27:03 np0005580781.novalocal sudo[7366]: pam_unix(sudo:session): session closed for user root
Jan 10 16:27:04 np0005580781.novalocal sudo[7439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckkgdkivgzfuqpvgezsoxxxpscyjsqtw ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 10 16:27:04 np0005580781.novalocal sudo[7439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:27:04 np0005580781.novalocal python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/ansible-tmp-1768062423.635777-267-210518818811984/source _original_basename=tmp_ghykej1 follow=False checksum=7fba06d2e41938c83d6477fc2dd3f650e30fc2d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:27:04 np0005580781.novalocal sudo[7439]: pam_unix(sudo:session): session closed for user root
Jan 10 16:27:06 np0005580781.novalocal sshd-session[7289]: Connection closed by 38.102.83.114 port 40876
Jan 10 16:27:06 np0005580781.novalocal sshd-session[7286]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:27:06 np0005580781.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 10 16:27:06 np0005580781.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Jan 10 16:27:06 np0005580781.novalocal systemd-logind[798]: Removed session 3.
Jan 10 16:27:15 np0005580781.novalocal systemd[4300]: Starting Mark boot as successful...
Jan 10 16:27:15 np0005580781.novalocal systemd[4300]: Finished Mark boot as successful.
Jan 10 16:27:49 np0005580781.novalocal sshd-session[7467]: Invalid user admin from 216.36.124.133 port 47586
Jan 10 16:27:49 np0005580781.novalocal sshd-session[7467]: Connection closed by invalid user admin 216.36.124.133 port 47586 [preauth]
Jan 10 16:29:15 np0005580781.novalocal systemd[4300]: Created slice User Background Tasks Slice.
Jan 10 16:29:16 np0005580781.novalocal systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Jan 10 16:29:16 np0005580781.novalocal systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Jan 10 16:29:27 np0005580781.novalocal sshd-session[7471]: Invalid user orangepi from 216.36.124.133 port 48736
Jan 10 16:29:27 np0005580781.novalocal sshd-session[7471]: Connection closed by invalid user orangepi 216.36.124.133 port 48736 [preauth]
Jan 10 16:31:06 np0005580781.novalocal sshd-session[7474]: Connection closed by authenticating user root 216.36.124.133 port 50094 [preauth]
Jan 10 16:31:28 np0005580781.novalocal sshd-session[7477]: Accepted publickey for zuul from 38.102.83.114 port 55560 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:31:28 np0005580781.novalocal systemd-logind[798]: New session 4 of user zuul.
Jan 10 16:31:28 np0005580781.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 10 16:31:28 np0005580781.novalocal sshd-session[7477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:31:29 np0005580781.novalocal sudo[7504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrhpebsdsmgacnayldmdrwpcjnnpadk ; /usr/bin/python3'
Jan 10 16:31:29 np0005580781.novalocal sudo[7504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:29 np0005580781.novalocal python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-e6de-bd89-000000002159-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:29 np0005580781.novalocal sudo[7504]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:29 np0005580781.novalocal sudo[7532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfcaxmzhqrcrpjudswtyvqmfihktpof ; /usr/bin/python3'
Jan 10 16:31:29 np0005580781.novalocal sudo[7532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:29 np0005580781.novalocal python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:29 np0005580781.novalocal sudo[7532]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:29 np0005580781.novalocal sudo[7558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshhuvkjmbjyglziswsttspkzrrkteut ; /usr/bin/python3'
Jan 10 16:31:29 np0005580781.novalocal sudo[7558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:29 np0005580781.novalocal python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:29 np0005580781.novalocal sudo[7558]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:29 np0005580781.novalocal sudo[7585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbklhlxjsrtaduytsltzjntoeoxkhfx ; /usr/bin/python3'
Jan 10 16:31:29 np0005580781.novalocal sudo[7585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:30 np0005580781.novalocal python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:30 np0005580781.novalocal sudo[7585]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:30 np0005580781.novalocal sudo[7611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbeyzwpdvnildwnhsispxztvriglgsil ; /usr/bin/python3'
Jan 10 16:31:30 np0005580781.novalocal sudo[7611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:30 np0005580781.novalocal python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:30 np0005580781.novalocal sudo[7611]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:30 np0005580781.novalocal sudo[7637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywhmegglqrsecvlfpfmrflymmlndqdi ; /usr/bin/python3'
Jan 10 16:31:30 np0005580781.novalocal sudo[7637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:30 np0005580781.novalocal python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:30 np0005580781.novalocal sudo[7637]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:31 np0005580781.novalocal sudo[7715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpfjawdusoasjptencegrsioltihcql ; /usr/bin/python3'
Jan 10 16:31:31 np0005580781.novalocal sudo[7715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:31 np0005580781.novalocal python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:31:31 np0005580781.novalocal sudo[7715]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:31 np0005580781.novalocal sudo[7788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfburzzkubquqdfoturehcinkkqxptb ; /usr/bin/python3'
Jan 10 16:31:31 np0005580781.novalocal sudo[7788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:31 np0005580781.novalocal python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062691.1167693-487-206991698135714/source _original_basename=tmpxeq2bqjc follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:31:31 np0005580781.novalocal sudo[7788]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:32 np0005580781.novalocal sudo[7838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbazbiurvlmuumnximkgjigmziwwylvx ; /usr/bin/python3'
Jan 10 16:31:32 np0005580781.novalocal sudo[7838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:33 np0005580781.novalocal python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 16:31:33 np0005580781.novalocal systemd[1]: Reloading.
Jan 10 16:31:33 np0005580781.novalocal systemd-rc-local-generator[7862]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:31:33 np0005580781.novalocal sudo[7838]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:34 np0005580781.novalocal sudo[7895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfpbmdjwxovplcamyyzgscojuoxjwqhp ; /usr/bin/python3'
Jan 10 16:31:34 np0005580781.novalocal sudo[7895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:34 np0005580781.novalocal python3[7897]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 10 16:31:34 np0005580781.novalocal sudo[7895]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:35 np0005580781.novalocal sudo[7921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwfipxebhrhipyinncymcavewnnrccee ; /usr/bin/python3'
Jan 10 16:31:35 np0005580781.novalocal sudo[7921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:35 np0005580781.novalocal irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 10 16:31:35 np0005580781.novalocal irqbalance[794]: IRQ 27 affinity is now unmanaged
Jan 10 16:31:35 np0005580781.novalocal python3[7923]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:35 np0005580781.novalocal sudo[7921]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:35 np0005580781.novalocal sudo[7949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmoebzqgarskrwyootljvmcijrrxvqk ; /usr/bin/python3'
Jan 10 16:31:35 np0005580781.novalocal sudo[7949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:35 np0005580781.novalocal python3[7951]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:35 np0005580781.novalocal sudo[7949]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:35 np0005580781.novalocal sudo[7977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sscnhmmhddzthsltszhoiaujlxtfbrra ; /usr/bin/python3'
Jan 10 16:31:35 np0005580781.novalocal sudo[7977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:35 np0005580781.novalocal python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:35 np0005580781.novalocal sudo[7977]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:36 np0005580781.novalocal sudo[8005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rypvyavqlppucldeguhwnkbjqmyawbwf ; /usr/bin/python3'
Jan 10 16:31:36 np0005580781.novalocal sudo[8005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:36 np0005580781.novalocal python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:36 np0005580781.novalocal sudo[8005]: pam_unix(sudo:session): session closed for user root
Jan 10 16:31:36 np0005580781.novalocal python3[8034]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-e6de-bd89-000000002160-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:31:37 np0005580781.novalocal python3[8064]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:31:39 np0005580781.novalocal sshd-session[7480]: Connection closed by 38.102.83.114 port 55560
Jan 10 16:31:39 np0005580781.novalocal sshd-session[7477]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:31:39 np0005580781.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 10 16:31:39 np0005580781.novalocal systemd[1]: session-4.scope: Consumed 4.197s CPU time.
Jan 10 16:31:39 np0005580781.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Jan 10 16:31:39 np0005580781.novalocal systemd-logind[798]: Removed session 4.
Jan 10 16:31:40 np0005580781.novalocal sshd-session[8069]: Accepted publickey for zuul from 38.102.83.114 port 57492 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:31:40 np0005580781.novalocal systemd-logind[798]: New session 5 of user zuul.
Jan 10 16:31:40 np0005580781.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 10 16:31:40 np0005580781.novalocal sshd-session[8069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:31:40 np0005580781.novalocal sudo[8096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbqqwcuvbfgtxqihipuwvxwvyygevqn ; /usr/bin/python3'
Jan 10 16:31:40 np0005580781.novalocal sudo[8096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:31:41 np0005580781.novalocal python3[8098]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:31:47 np0005580781.novalocal setsebool[8140]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 10 16:31:47 np0005580781.novalocal setsebool[8140]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:31:58 np0005580781.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:32:07 np0005580781.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:32:25 np0005580781.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 10 16:32:25 np0005580781.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:32:25 np0005580781.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:32:25 np0005580781.novalocal systemd[1]: Reloading.
Jan 10 16:32:25 np0005580781.novalocal systemd-rc-local-generator[8909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:32:26 np0005580781.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:32:27 np0005580781.novalocal sudo[8096]: pam_unix(sudo:session): session closed for user root
Jan 10 16:32:41 np0005580781.novalocal python3[17172]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-0026-8d85-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:32:41 np0005580781.novalocal kernel: evm: overlay not supported
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: Starting D-Bus User Message Bus...
Jan 10 16:32:42 np0005580781.novalocal dbus-broker-launch[17590]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 10 16:32:42 np0005580781.novalocal dbus-broker-launch[17590]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: Started D-Bus User Message Bus.
Jan 10 16:32:42 np0005580781.novalocal dbus-broker-lau[17590]: Ready
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: Created slice Slice /user.
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: podman-17519.scope: unit configures an IP firewall, but not running as root.
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: Started podman-17519.scope.
Jan 10 16:32:42 np0005580781.novalocal systemd[4300]: Started podman-pause-947aeabd.scope.
Jan 10 16:32:42 np0005580781.novalocal sudo[17956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iquwlwdnnjxzehctvilworhzulwbgthr ; /usr/bin/python3'
Jan 10 16:32:42 np0005580781.novalocal sudo[17956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:32:42 np0005580781.novalocal python3[17970]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.73:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.73:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:32:42 np0005580781.novalocal python3[17970]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 10 16:32:42 np0005580781.novalocal sudo[17956]: pam_unix(sudo:session): session closed for user root
Jan 10 16:32:43 np0005580781.novalocal sshd-session[8072]: Connection closed by 38.102.83.114 port 57492
Jan 10 16:32:43 np0005580781.novalocal sshd-session[8069]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:32:43 np0005580781.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 10 16:32:43 np0005580781.novalocal systemd[1]: session-5.scope: Consumed 42.610s CPU time.
Jan 10 16:32:43 np0005580781.novalocal systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Jan 10 16:32:43 np0005580781.novalocal systemd-logind[798]: Removed session 5.
Jan 10 16:32:52 np0005580781.novalocal sshd-session[20931]: Connection closed by authenticating user root 216.36.124.133 port 51394 [preauth]
Jan 10 16:33:02 np0005580781.novalocal sshd-session[24946]: Unable to negotiate with 38.102.83.82 port 47122: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 10 16:33:02 np0005580781.novalocal sshd-session[24948]: Connection closed by 38.102.83.82 port 47110 [preauth]
Jan 10 16:33:02 np0005580781.novalocal sshd-session[24956]: Connection closed by 38.102.83.82 port 47106 [preauth]
Jan 10 16:33:02 np0005580781.novalocal sshd-session[24949]: Unable to negotiate with 38.102.83.82 port 47114: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 10 16:33:02 np0005580781.novalocal sshd-session[24954]: Unable to negotiate with 38.102.83.82 port 47120: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 10 16:33:06 np0005580781.novalocal sshd-session[26685]: Accepted publickey for zuul from 38.102.83.114 port 60644 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:33:06 np0005580781.novalocal systemd-logind[798]: New session 6 of user zuul.
Jan 10 16:33:06 np0005580781.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 10 16:33:06 np0005580781.novalocal sshd-session[26685]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:33:07 np0005580781.novalocal python3[26783]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:33:07 np0005580781.novalocal sudo[26911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppmfvcwmpwtupzzhtnucjftbwceipved ; /usr/bin/python3'
Jan 10 16:33:07 np0005580781.novalocal sudo[26911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:07 np0005580781.novalocal python3[26921]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:33:07 np0005580781.novalocal sudo[26911]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:08 np0005580781.novalocal sudo[27299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yinqpbkqeaimywmtrddylpphntftaqmk ; /usr/bin/python3'
Jan 10 16:33:08 np0005580781.novalocal sudo[27299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:08 np0005580781.novalocal python3[27308]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005580781.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 10 16:33:08 np0005580781.novalocal useradd[27364]: new group: name=cloud-admin, GID=1002
Jan 10 16:33:08 np0005580781.novalocal useradd[27364]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 10 16:33:08 np0005580781.novalocal sudo[27299]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:08 np0005580781.novalocal sudo[27491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadxdugawgpdacuoupiqzwsbuimrcjnn ; /usr/bin/python3'
Jan 10 16:33:08 np0005580781.novalocal sudo[27491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:08 np0005580781.novalocal python3[27497]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 16:33:08 np0005580781.novalocal sudo[27491]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:09 np0005580781.novalocal sudo[27726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygiidtqtcmjcyojlwahejfaoyopjiqit ; /usr/bin/python3'
Jan 10 16:33:09 np0005580781.novalocal sudo[27726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:09 np0005580781.novalocal python3[27736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:33:09 np0005580781.novalocal sudo[27726]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:09 np0005580781.novalocal sudo[27970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftxyahdvusepfehakhyqdwfxhbpakfq ; /usr/bin/python3'
Jan 10 16:33:09 np0005580781.novalocal sudo[27970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:09 np0005580781.novalocal python3[27979]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062789.0827332-135-99394940406518/source _original_basename=tmpbrzuoljp follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:33:09 np0005580781.novalocal sudo[27970]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:10 np0005580781.novalocal sudo[28244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjjgbzepskajkcquojlixgxknuakpkqw ; /usr/bin/python3'
Jan 10 16:33:10 np0005580781.novalocal sudo[28244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:33:10 np0005580781.novalocal python3[28253]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 10 16:33:10 np0005580781.novalocal systemd[1]: Starting Hostname Service...
Jan 10 16:33:10 np0005580781.novalocal systemd[1]: Started Hostname Service.
Jan 10 16:33:10 np0005580781.novalocal systemd-hostnamed[28356]: Changed pretty hostname to 'compute-0'
Jan 10 16:33:10 compute-0 systemd-hostnamed[28356]: Hostname set to <compute-0> (static)
Jan 10 16:33:10 compute-0 NetworkManager[7178]: <info>  [1768062790.9354] hostname: static hostname changed from "np0005580781.novalocal" to "compute-0"
Jan 10 16:33:10 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:33:10 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:33:10 compute-0 sudo[28244]: pam_unix(sudo:session): session closed for user root
Jan 10 16:33:11 compute-0 sshd-session[26727]: Connection closed by 38.102.83.114 port 60644
Jan 10 16:33:11 compute-0 sshd-session[26685]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:33:11 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 10 16:33:11 compute-0 systemd[1]: session-6.scope: Consumed 2.503s CPU time.
Jan 10 16:33:11 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Jan 10 16:33:11 compute-0 systemd-logind[798]: Removed session 6.
Jan 10 16:33:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:33:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:33:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 91ms CPU time.
Jan 10 16:33:15 compute-0 systemd[1]: run-r91412d436caa4e05bb22e9aedcd8ad7b.service: Deactivated successfully.
Jan 10 16:33:20 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:33:40 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 16:34:38 compute-0 sshd-session[29970]: Connection closed by authenticating user root 216.36.124.133 port 52608 [preauth]
Jan 10 16:35:00 compute-0 sshd-session[29974]: Received disconnect from 193.46.255.7 port 38851:11:  [preauth]
Jan 10 16:35:00 compute-0 sshd-session[29974]: Disconnected from authenticating user root 193.46.255.7 port 38851 [preauth]
Jan 10 16:36:24 compute-0 sshd-session[29977]: Connection closed by authenticating user root 216.36.124.133 port 54022 [preauth]
Jan 10 16:36:58 compute-0 sshd-session[29979]: Accepted publickey for zuul from 38.102.83.82 port 35890 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:36:58 compute-0 systemd-logind[798]: New session 7 of user zuul.
Jan 10 16:36:58 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 10 16:36:58 compute-0 sshd-session[29979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:36:58 compute-0 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:37:00 compute-0 sudo[30169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efvknhjgclprvhwldfwtbgbmxdgmxevw ; /usr/bin/python3'
Jan 10 16:37:00 compute-0 sudo[30169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:00 compute-0 python3[30171]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:00 compute-0 sudo[30169]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:00 compute-0 sudo[30242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzsydocyfvxizdggffwowcuwgmchnqru ; /usr/bin/python3'
Jan 10 16:37:00 compute-0 sudo[30242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:00 compute-0 python3[30244]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:00 compute-0 sudo[30242]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:01 compute-0 sudo[30268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myaynpwphwaodrlnssjdbtcoabuxvccp ; /usr/bin/python3'
Jan 10 16:37:01 compute-0 sudo[30268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:01 compute-0 python3[30270]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:01 compute-0 sudo[30268]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:01 compute-0 sudo[30341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxqmzimhkyriqeztnizrmeqascmfiilb ; /usr/bin/python3'
Jan 10 16:37:01 compute-0 sudo[30341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:01 compute-0 python3[30343]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:01 compute-0 sudo[30341]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:01 compute-0 sudo[30367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxikvvebkszelreqgtfipdohviggtlst ; /usr/bin/python3'
Jan 10 16:37:01 compute-0 sudo[30367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:01 compute-0 python3[30369]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:01 compute-0 sudo[30367]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:01 compute-0 sudo[30440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iptgjygrcpztrcrpeetbajgrpwcjuqpb ; /usr/bin/python3'
Jan 10 16:37:01 compute-0 sudo[30440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:02 compute-0 python3[30442]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:02 compute-0 sudo[30440]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:02 compute-0 sudo[30466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrnnfvkfvzwyxsuxmoxuxtuxflhchea ; /usr/bin/python3'
Jan 10 16:37:02 compute-0 sudo[30466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:02 compute-0 python3[30468]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:02 compute-0 sudo[30466]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:02 compute-0 sudo[30539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehrhzqfznjlbvncqxuidrfcdjbnokddt ; /usr/bin/python3'
Jan 10 16:37:02 compute-0 sudo[30539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:02 compute-0 python3[30541]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:02 compute-0 sudo[30539]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:02 compute-0 sudo[30565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqkfvkeoqmrodqbybsjzrozpkutqqcfg ; /usr/bin/python3'
Jan 10 16:37:02 compute-0 sudo[30565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:02 compute-0 python3[30567]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:02 compute-0 sudo[30565]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:03 compute-0 sudo[30638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhejkmdoweudjrwgepwqlitttfqayzh ; /usr/bin/python3'
Jan 10 16:37:03 compute-0 sudo[30638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:03 compute-0 python3[30640]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:03 compute-0 sudo[30638]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:03 compute-0 sudo[30664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqynjkudpprjrkmrjvwrmiktuqflqogl ; /usr/bin/python3'
Jan 10 16:37:03 compute-0 sudo[30664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:03 compute-0 python3[30666]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:03 compute-0 sudo[30664]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:03 compute-0 sudo[30737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrgwwkdjhnxccwijjsxlekujgzebdrf ; /usr/bin/python3'
Jan 10 16:37:03 compute-0 sudo[30737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:04 compute-0 python3[30739]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:04 compute-0 sudo[30737]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:04 compute-0 sudo[30763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cteavqzpihbbrnzdibrposdqbfjsqvmt ; /usr/bin/python3'
Jan 10 16:37:04 compute-0 sudo[30763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:04 compute-0 python3[30765]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:37:04 compute-0 sudo[30763]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:04 compute-0 sudo[30836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foniznmtjukdtvafgfmghckwrkutsimt ; /usr/bin/python3'
Jan 10 16:37:04 compute-0 sudo[30836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:37:04 compute-0 python3[30838]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:37:04 compute-0 sudo[30836]: pam_unix(sudo:session): session closed for user root
Jan 10 16:37:07 compute-0 sshd-session[30863]: Connection closed by 192.168.122.11 port 58326 [preauth]
Jan 10 16:37:07 compute-0 sshd-session[30864]: Connection closed by 192.168.122.11 port 58336 [preauth]
Jan 10 16:37:07 compute-0 sshd-session[30865]: Unable to negotiate with 192.168.122.11 port 58346: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 10 16:37:07 compute-0 sshd-session[30866]: Unable to negotiate with 192.168.122.11 port 58354: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 10 16:37:07 compute-0 sshd-session[30867]: Unable to negotiate with 192.168.122.11 port 58356: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 10 16:37:16 compute-0 python3[30896]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:38:09 compute-0 sshd-session[30899]: Connection closed by authenticating user root 216.36.124.133 port 55220 [preauth]
Jan 10 16:39:05 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 10 16:39:06 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 10 16:39:06 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 10 16:39:06 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 10 16:40:03 compute-0 sshd-session[30904]: Connection closed by authenticating user root 34.122.156.88 port 52378 [preauth]
Jan 10 16:40:03 compute-0 sshd-session[30906]: Connection closed by authenticating user root 34.122.156.88 port 52384 [preauth]
Jan 10 16:40:03 compute-0 sshd-session[30908]: Connection closed by authenticating user root 34.122.156.88 port 52390 [preauth]
Jan 10 16:40:04 compute-0 sshd-session[30910]: Connection closed by authenticating user root 34.122.156.88 port 52394 [preauth]
Jan 10 16:40:04 compute-0 sshd-session[30912]: Connection closed by authenticating user root 34.122.156.88 port 52410 [preauth]
Jan 10 16:40:04 compute-0 sshd-session[30916]: Connection closed by authenticating user root 34.122.156.88 port 52424 [preauth]
Jan 10 16:40:04 compute-0 sshd-session[30918]: Connection closed by authenticating user root 34.122.156.88 port 52430 [preauth]
Jan 10 16:40:04 compute-0 sshd-session[30920]: Connection closed by authenticating user root 34.122.156.88 port 52440 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30922]: Connection closed by authenticating user root 34.122.156.88 port 52444 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30914]: Connection closed by authenticating user root 216.36.124.133 port 56396 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30924]: Connection closed by authenticating user root 34.122.156.88 port 52454 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30926]: Connection closed by authenticating user root 34.122.156.88 port 52468 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30928]: Connection closed by authenticating user root 34.122.156.88 port 52484 [preauth]
Jan 10 16:40:05 compute-0 sshd-session[30930]: Connection closed by authenticating user root 34.122.156.88 port 52486 [preauth]
Jan 10 16:40:06 compute-0 sshd-session[30932]: Connection closed by authenticating user root 34.122.156.88 port 52490 [preauth]
Jan 10 16:40:06 compute-0 sshd-session[30934]: Connection closed by authenticating user root 34.122.156.88 port 52494 [preauth]
Jan 10 16:40:06 compute-0 sshd-session[30936]: Connection closed by authenticating user root 34.122.156.88 port 52508 [preauth]
Jan 10 16:40:06 compute-0 sshd-session[30938]: Connection closed by authenticating user root 34.122.156.88 port 52524 [preauth]
Jan 10 16:40:06 compute-0 sshd-session[30940]: Connection closed by authenticating user root 34.122.156.88 port 52540 [preauth]
Jan 10 16:40:07 compute-0 sshd-session[30942]: Connection closed by authenticating user root 34.122.156.88 port 52544 [preauth]
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52556 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52560 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52568 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52574 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52578 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52586 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52594 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52596 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52602 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52614 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:07 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52616 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52628 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52630 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52644 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52654 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52666 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52678 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52692 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52694 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52700 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52708 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52716 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52726 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52734 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52750 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52760 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:08 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52768 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52772 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52784 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52790 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52792 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52808 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:52816 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43826 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43832 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43836 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43850 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43864 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43878 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43892 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43906 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43912 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:09 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43916 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43922 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43928 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43938 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43940 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43946 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43954 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43958 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43960 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43962 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43976 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43988 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43994 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:43998 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44000 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44002 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44004 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:10 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44020 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44034 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44050 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44056 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44064 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44072 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44074 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44076 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44078 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44090 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44096 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44100 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44104 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44116 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44128 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:11 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44136 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44140 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44156 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44164 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44180 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44188 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44204 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44216 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44230 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44244 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44248 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44256 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44260 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44266 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44280 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44294 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:12 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44302 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44314 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44326 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44334 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44336 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44342 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44350 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44366 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44368 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44382 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44394 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44406 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44420 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44426 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44442 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44454 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:13 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44464 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44476 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44488 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44500 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44516 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44522 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44536 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44538 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44544 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44560 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44568 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44582 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44598 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44600 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44610 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44614 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:14 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44630 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44632 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44640 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44646 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44652 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44660 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44672 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44688 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44702 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44706 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44708 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44724 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44738 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44748 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44754 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44756 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:15 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44768 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44770 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44772 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44786 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44790 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44792 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44804 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44814 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44824 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44840 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44846 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44860 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44874 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44878 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44892 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44904 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:16 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44912 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44918 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44926 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44934 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44948 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44950 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44952 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44968 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44974 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44984 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44990 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:44992 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45008 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45018 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45022 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45026 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:17 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45032 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45040 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45050 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45056 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45072 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45080 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45082 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45094 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45102 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45118 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45126 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45138 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45146 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45148 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45158 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45170 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:18 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45172 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45180 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45190 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45202 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45206 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45214 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:45222 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60518 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60534 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60542 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60550 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60556 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60560 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60568 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60584 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60598 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60610 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:19 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60622 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60628 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60638 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60654 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60656 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60660 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60662 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60664 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60674 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60690 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60704 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60718 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60734 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60740 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60752 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60764 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:20 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60774 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60780 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60794 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60808 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60810 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60812 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60828 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60836 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60846 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60858 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60874 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60882 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60896 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60904 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60906 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60918 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:21 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60932 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60948 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60956 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60968 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60972 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60976 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60986 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60992 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60994 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32776 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32792 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32800 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32808 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32820 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32822 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32836 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:22 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32844 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32860 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32862 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32874 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32886 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:32902 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:23 compute-0 sshd-session[30944]: Invalid user debian from 34.122.156.88 port 32908
Jan 10 16:40:23 compute-0 sshd-session[30944]: Connection closed by invalid user debian 34.122.156.88 port 32908 [preauth]
Jan 10 16:40:23 compute-0 sshd-session[30946]: Invalid user debian from 34.122.156.88 port 32922
Jan 10 16:40:23 compute-0 sshd-session[30946]: Connection closed by invalid user debian 34.122.156.88 port 32922 [preauth]
Jan 10 16:40:23 compute-0 sshd-session[30948]: Invalid user debian from 34.122.156.88 port 32930
Jan 10 16:40:23 compute-0 sshd-session[30948]: Connection closed by invalid user debian 34.122.156.88 port 32930 [preauth]
Jan 10 16:40:24 compute-0 sshd-session[30950]: Invalid user debian from 34.122.156.88 port 32940
Jan 10 16:40:24 compute-0 sshd-session[30950]: Connection closed by invalid user debian 34.122.156.88 port 32940 [preauth]
Jan 10 16:40:24 compute-0 sshd-session[30952]: Invalid user debian from 34.122.156.88 port 32950
Jan 10 16:40:24 compute-0 sshd-session[30952]: Connection closed by invalid user debian 34.122.156.88 port 32950 [preauth]
Jan 10 16:40:24 compute-0 sshd-session[30954]: Invalid user debian from 34.122.156.88 port 32958
Jan 10 16:40:24 compute-0 sshd-session[30954]: Connection closed by invalid user debian 34.122.156.88 port 32958 [preauth]
Jan 10 16:40:24 compute-0 sshd-session[30956]: Invalid user debian from 34.122.156.88 port 32968
Jan 10 16:40:24 compute-0 sshd-session[30956]: Connection closed by invalid user debian 34.122.156.88 port 32968 [preauth]
Jan 10 16:40:24 compute-0 sshd-session[30958]: Invalid user debian from 34.122.156.88 port 32980
Jan 10 16:40:24 compute-0 sshd-session[30958]: Connection closed by invalid user debian 34.122.156.88 port 32980 [preauth]
Jan 10 16:40:25 compute-0 sshd-session[30960]: Invalid user debian from 34.122.156.88 port 32990
Jan 10 16:40:25 compute-0 sshd-session[30960]: Connection closed by invalid user debian 34.122.156.88 port 32990 [preauth]
Jan 10 16:40:25 compute-0 sshd-session[30962]: Invalid user debian from 34.122.156.88 port 32996
Jan 10 16:40:25 compute-0 sshd-session[30962]: Connection closed by invalid user debian 34.122.156.88 port 32996 [preauth]
Jan 10 16:40:25 compute-0 sshd-session[30964]: Invalid user debian from 34.122.156.88 port 33008
Jan 10 16:40:25 compute-0 sshd-session[30964]: Connection closed by invalid user debian 34.122.156.88 port 33008 [preauth]
Jan 10 16:40:25 compute-0 sshd-session[30966]: Invalid user debian from 34.122.156.88 port 33016
Jan 10 16:40:25 compute-0 sshd-session[30966]: Connection closed by invalid user debian 34.122.156.88 port 33016 [preauth]
Jan 10 16:40:25 compute-0 sshd-session[30968]: Invalid user debian from 34.122.156.88 port 33030
Jan 10 16:40:25 compute-0 sshd-session[30968]: Connection closed by invalid user debian 34.122.156.88 port 33030 [preauth]
Jan 10 16:40:26 compute-0 sshd-session[30970]: Invalid user debian from 34.122.156.88 port 33042
Jan 10 16:40:26 compute-0 sshd-session[30970]: Connection closed by invalid user debian 34.122.156.88 port 33042 [preauth]
Jan 10 16:40:26 compute-0 sshd-session[30972]: Invalid user debian from 34.122.156.88 port 33054
Jan 10 16:40:26 compute-0 sshd-session[30972]: Connection closed by invalid user debian 34.122.156.88 port 33054 [preauth]
Jan 10 16:40:26 compute-0 sshd-session[30974]: Invalid user debian from 34.122.156.88 port 33062
Jan 10 16:40:26 compute-0 sshd-session[30974]: Connection closed by invalid user debian 34.122.156.88 port 33062 [preauth]
Jan 10 16:40:26 compute-0 sshd-session[30976]: Invalid user debian from 34.122.156.88 port 33070
Jan 10 16:40:26 compute-0 sshd-session[30976]: Connection closed by invalid user debian 34.122.156.88 port 33070 [preauth]
Jan 10 16:40:26 compute-0 sshd-session[30978]: Invalid user debian from 34.122.156.88 port 33084
Jan 10 16:40:27 compute-0 sshd-session[30978]: Connection closed by invalid user debian 34.122.156.88 port 33084 [preauth]
Jan 10 16:40:27 compute-0 sshd-session[30980]: Invalid user debian from 34.122.156.88 port 33092
Jan 10 16:40:27 compute-0 sshd-session[30980]: Connection closed by invalid user debian 34.122.156.88 port 33092 [preauth]
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33096 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33102 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33110 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33116 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33118 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33120 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33128 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33136 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33152 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33166 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33178 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33192 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:27 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33196 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33204 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33220 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33232 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33244 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33254 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33266 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33282 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33296 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33310 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33314 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33320 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33324 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33340 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33346 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33354 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:28 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33362 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33376 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33380 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33394 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33398 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33406 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:33410 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59826 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59834 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59850 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59860 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59874 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59880 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59890 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59902 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59910 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:29 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59924 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59930 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59942 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59948 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59958 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59962 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59974 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59988 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:59998 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60000 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60010 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60016 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60024 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60036 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60052 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60056 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60070 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:30 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60084 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60096 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60110 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60114 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60126 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60128 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60144 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60160 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60174 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60188 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60200 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60204 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60220 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60228 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60244 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60252 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:31 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60256 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60260 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60270 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60272 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60282 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60296 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60300 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60302 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60318 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60328 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60330 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60336 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60338 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60340 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60352 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60360 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:32 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60364 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60366 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60382 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60394 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60410 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60414 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60426 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60432 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60436 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60450 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60464 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60476 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60486 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60494 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60500 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60502 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:33 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60512 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60526 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60538 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60544 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60552 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60564 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60570 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:40:34 compute-0 sshd[1007]: drop connection #0 from [34.122.156.88]:60584 on [38.102.83.74]:22 penalty: connections without attempting authentication
Jan 10 16:41:59 compute-0 sshd-session[30982]: Invalid user test from 216.36.124.133 port 57430
Jan 10 16:42:00 compute-0 sshd-session[30982]: Connection closed by invalid user test 216.36.124.133 port 57430 [preauth]
Jan 10 16:42:16 compute-0 sshd-session[29982]: Received disconnect from 38.102.83.82 port 35890:11: disconnected by user
Jan 10 16:42:16 compute-0 sshd-session[29982]: Disconnected from user zuul 38.102.83.82 port 35890
Jan 10 16:42:16 compute-0 sshd-session[29979]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:42:16 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 10 16:42:16 compute-0 systemd[1]: session-7.scope: Consumed 5.276s CPU time.
Jan 10 16:42:16 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Jan 10 16:42:16 compute-0 systemd-logind[798]: Removed session 7.
Jan 10 16:43:55 compute-0 sshd-session[30985]: Invalid user user from 216.36.124.133 port 58904
Jan 10 16:43:55 compute-0 sshd-session[30985]: Connection closed by invalid user user 216.36.124.133 port 58904 [preauth]
Jan 10 16:45:51 compute-0 sshd-session[30988]: Connection closed by authenticating user root 216.36.124.133 port 60434 [preauth]
Jan 10 16:47:46 compute-0 sshd-session[30990]: Invalid user admin from 216.36.124.133 port 33682
Jan 10 16:47:46 compute-0 sshd-session[30990]: Connection closed by invalid user admin 216.36.124.133 port 33682 [preauth]
Jan 10 16:48:21 compute-0 sshd-session[30992]: Accepted publickey for zuul from 192.168.122.30 port 46458 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:48:21 compute-0 systemd-logind[798]: New session 8 of user zuul.
Jan 10 16:48:21 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 10 16:48:21 compute-0 sshd-session[30992]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:48:22 compute-0 python3.9[31145]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:48:23 compute-0 sudo[31324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fytabatbpnbjiqkscqsxwqetwvleiwlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063703.3522415-27-125155848854972/AnsiballZ_command.py'
Jan 10 16:48:23 compute-0 sudo[31324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:24 compute-0 python3.9[31326]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:48:32 compute-0 sudo[31324]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:32 compute-0 sshd-session[30995]: Connection closed by 192.168.122.30 port 46458
Jan 10 16:48:32 compute-0 sshd-session[30992]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:48:32 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 10 16:48:32 compute-0 systemd[1]: session-8.scope: Consumed 8.792s CPU time.
Jan 10 16:48:32 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Jan 10 16:48:32 compute-0 systemd-logind[798]: Removed session 8.
Jan 10 16:48:47 compute-0 sshd-session[31384]: Accepted publickey for zuul from 192.168.122.30 port 38208 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:48:47 compute-0 systemd-logind[798]: New session 9 of user zuul.
Jan 10 16:48:47 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 10 16:48:47 compute-0 sshd-session[31384]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:48:48 compute-0 python3.9[31537]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 10 16:48:49 compute-0 python3.9[31711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:48:50 compute-0 sudo[31861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdjnnrzbjtuxkvhvcdkmnekjtfseyle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063729.8904524-40-173155758597010/AnsiballZ_command.py'
Jan 10 16:48:50 compute-0 sudo[31861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:50 compute-0 python3.9[31863]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:48:50 compute-0 sudo[31861]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:51 compute-0 sudo[32014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grwpahjfxxdincgludukikrxgdnljuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063731.3049576-52-248461246167488/AnsiballZ_stat.py'
Jan 10 16:48:51 compute-0 sudo[32014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:51 compute-0 python3.9[32016]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:48:52 compute-0 sudo[32014]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:52 compute-0 sudo[32166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycwlgzyvbtdjjdlemjxaizmrtstgqmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063732.184399-60-97133419954920/AnsiballZ_file.py'
Jan 10 16:48:52 compute-0 sudo[32166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:52 compute-0 python3.9[32168]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:48:52 compute-0 sudo[32166]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:53 compute-0 sudo[32318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosqncejtpmtqtnxhguoxfftbttvscpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063733.058479-68-19246270801804/AnsiballZ_stat.py'
Jan 10 16:48:53 compute-0 sudo[32318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:53 compute-0 python3.9[32320]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:48:53 compute-0 sudo[32318]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:54 compute-0 sudo[32441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewhdacjygikzvbjfmbpuuzdnyfijpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063733.058479-68-19246270801804/AnsiballZ_copy.py'
Jan 10 16:48:54 compute-0 sudo[32441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:54 compute-0 python3.9[32443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063733.058479-68-19246270801804/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:48:54 compute-0 sudo[32441]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:54 compute-0 sudo[32593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpjjftvbkniitjiilmumddzgxmktohjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063734.5052023-83-207710492885960/AnsiballZ_setup.py'
Jan 10 16:48:54 compute-0 sudo[32593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:55 compute-0 python3.9[32595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:48:55 compute-0 sudo[32593]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:55 compute-0 irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 10 16:48:55 compute-0 irqbalance[794]: IRQ 26 affinity is now unmanaged
Jan 10 16:48:55 compute-0 sudo[32749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdzspwwvkxblobgxvmbxgjpwfomrowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063735.5461857-91-83457514947058/AnsiballZ_file.py'
Jan 10 16:48:55 compute-0 sudo[32749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:55 compute-0 python3.9[32751]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:48:56 compute-0 sudo[32749]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:56 compute-0 sudo[32901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzsspbdiewwtxttnhtyknszsmoojjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063736.2121165-100-35711052935037/AnsiballZ_file.py'
Jan 10 16:48:56 compute-0 sudo[32901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:48:56 compute-0 python3.9[32903]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:48:56 compute-0 sudo[32901]: pam_unix(sudo:session): session closed for user root
Jan 10 16:48:57 compute-0 python3.9[33053]: ansible-ansible.builtin.service_facts Invoked
Jan 10 16:49:00 compute-0 python3.9[33306]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:49:01 compute-0 python3.9[33456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:49:02 compute-0 python3.9[33610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:49:03 compute-0 sudo[33766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghsvlovmoepduejnhtkaooqftllkvha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063743.3910844-148-153223728510037/AnsiballZ_setup.py'
Jan 10 16:49:03 compute-0 sudo[33766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:49:04 compute-0 python3.9[33768]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:49:04 compute-0 sudo[33766]: pam_unix(sudo:session): session closed for user root
Jan 10 16:49:04 compute-0 sudo[33850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzcyujevdqzmjualgslighnjflpvsna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063743.3910844-148-153223728510037/AnsiballZ_dnf.py'
Jan 10 16:49:04 compute-0 sudo[33850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:49:04 compute-0 python3.9[33852]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:49:30 compute-0 sshd-session[33968]: Invalid user cirros from 216.36.124.133 port 34804
Jan 10 16:49:30 compute-0 sshd-session[33968]: Connection closed by invalid user cirros 216.36.124.133 port 34804 [preauth]
Jan 10 16:49:56 compute-0 systemd[1]: Reloading.
Jan 10 16:49:56 compute-0 systemd-rc-local-generator[34045]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:49:56 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 10 16:49:57 compute-0 systemd[1]: Reloading.
Jan 10 16:49:57 compute-0 systemd-rc-local-generator[34096]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:49:57 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 10 16:49:57 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 10 16:49:57 compute-0 systemd[1]: Reloading.
Jan 10 16:49:57 compute-0 systemd-rc-local-generator[34135]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:49:57 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 10 16:49:57 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 16:49:57 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 16:49:57 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 16:50:44 compute-0 sshd-session[34324]: Invalid user postgres from 36.111.150.151 port 54834
Jan 10 16:50:44 compute-0 sshd-session[34324]: Received disconnect from 36.111.150.151 port 54834:11:  [preauth]
Jan 10 16:50:44 compute-0 sshd-session[34324]: Disconnected from invalid user postgres 36.111.150.151 port 54834 [preauth]
Jan 10 16:51:05 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:51:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:51:05 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 10 16:51:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:51:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:51:05 compute-0 systemd[1]: Reloading.
Jan 10 16:51:05 compute-0 systemd-rc-local-generator[34470]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:51:05 compute-0 systemd[1]: Starting dnf makecache...
Jan 10 16:51:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:51:05 compute-0 dnf[34508]: Failed determining last makecache time.
Jan 10 16:51:05 compute-0 dnf[34508]: delorean-openstack-barbican-42b4c41831408a8e323 120 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 175 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-cinder-1c00d6490d88e436f26ef 162 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-stevedore-c4acc5639fd2329372142 176 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-cloudkitty-tests-tempest-2c80f8 178 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-os-refresh-config-9bfc52b5049be2d8de61 169 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 182 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-designate-tests-tempest-347fdbc 173 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-glance-1fd12c29b339f30fe823e 194 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 193 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-manila-3c01b7181572c95dac462 168 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 sudo[33850]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-whitebox-neutron-tests-tempest- 155 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-octavia-ba397f07a7331190208c 168 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-watcher-c014f81a8647287f6dcc 176 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-ansible-config_template-5ccaa22121a7ff 180 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 178 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-python-tempestconf-8515371b7cceebd4282 163 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: delorean-openstack-heat-ui-013accbfd179753bc3f0 173 kB/s | 3.0 kB     00:00
Jan 10 16:51:06 compute-0 dnf[34508]: CentOS Stream 9 - BaseOS                         68 kB/s | 6.7 kB     00:00
Jan 10 16:51:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:51:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:51:06 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.317s CPU time.
Jan 10 16:51:06 compute-0 systemd[1]: run-rd1f3c2c596344fefa723795417c7e0da.service: Deactivated successfully.
Jan 10 16:51:06 compute-0 sudo[35407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obppmadndlrjidsajfawrunpmeftwrnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063866.4034064-160-82245419951719/AnsiballZ_command.py'
Jan 10 16:51:06 compute-0 sudo[35407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:06 compute-0 dnf[34508]: CentOS Stream 9 - AppStream                      30 kB/s | 6.8 kB     00:00
Jan 10 16:51:06 compute-0 python3.9[35409]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:06 compute-0 dnf[34508]: CentOS Stream 9 - CRB                            65 kB/s | 6.6 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: CentOS Stream 9 - Extras packages                74 kB/s | 7.3 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: dlrn-antelope-testing                           130 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: dlrn-antelope-build-deps                        149 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: centos9-rabbitmq                                110 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: centos9-storage                                 107 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: centos9-opstools                                101 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: NFV SIG OpenvSwitch                             100 kB/s | 3.0 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: repo-setup-centos-appstream                     153 kB/s | 4.4 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: repo-setup-centos-baseos                        151 kB/s | 3.9 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: repo-setup-centos-highavailability              164 kB/s | 3.9 kB     00:00
Jan 10 16:51:07 compute-0 dnf[34508]: repo-setup-centos-powertools                    197 kB/s | 4.3 kB     00:00
Jan 10 16:51:07 compute-0 sudo[35407]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:07 compute-0 dnf[34508]: Extra Packages for Enterprise Linux 9 - x86_64  224 kB/s |  31 kB     00:00
Jan 10 16:51:08 compute-0 dnf[34508]: Metadata cache created.
Jan 10 16:51:08 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 10 16:51:08 compute-0 systemd[1]: Finished dnf makecache.
Jan 10 16:51:08 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.968s CPU time.
Jan 10 16:51:08 compute-0 sudo[35709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwprnkvvclrenhsflueprywweqiiniec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063867.9531202-168-58671233384827/AnsiballZ_selinux.py'
Jan 10 16:51:08 compute-0 sudo[35709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:08 compute-0 python3.9[35711]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 10 16:51:08 compute-0 sudo[35709]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:09 compute-0 sudo[35861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adstfgmoxqjfnxfmihfiyjtyqhfnhorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063869.1552486-179-85148465670267/AnsiballZ_command.py'
Jan 10 16:51:09 compute-0 sudo[35861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:09 compute-0 python3.9[35863]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 10 16:51:12 compute-0 sudo[35861]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:12 compute-0 sudo[36014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukkdglkdmiidhrfefhdaugqlpjtmflnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063872.2942038-187-203844283664945/AnsiballZ_file.py'
Jan 10 16:51:12 compute-0 sudo[36014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:13 compute-0 python3.9[36016]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:51:13 compute-0 sudo[36014]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:13 compute-0 sudo[36166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvyunbnvdicssaioxavkpqmuqgwizmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063873.2651641-195-93153827891922/AnsiballZ_mount.py'
Jan 10 16:51:13 compute-0 sudo[36166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:14 compute-0 python3.9[36168]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 10 16:51:14 compute-0 sudo[36166]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:15 compute-0 sudo[36318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyehxhqeadaagqxaygluytrlfzjcczeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063874.7755387-223-33380634617064/AnsiballZ_file.py'
Jan 10 16:51:15 compute-0 sudo[36318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:15 compute-0 python3.9[36320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:51:15 compute-0 sudo[36318]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:15 compute-0 sudo[36470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dggbtkqisyziwnxshlsxzhfbkombzwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063875.6270247-231-78958342477032/AnsiballZ_stat.py'
Jan 10 16:51:15 compute-0 sudo[36470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:16 compute-0 python3.9[36472]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:51:16 compute-0 sudo[36470]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:16 compute-0 sudo[36593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sozjyqshdytrzsmgzqsaypuluictjpay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063875.6270247-231-78958342477032/AnsiballZ_copy.py'
Jan 10 16:51:16 compute-0 sudo[36593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:16 compute-0 python3.9[36595]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768063875.6270247-231-78958342477032/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:51:16 compute-0 sudo[36593]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:17 compute-0 sudo[36745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-legwbgudclanhplbfdeabsrdvlkxcsam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063877.2806482-255-199181166924820/AnsiballZ_stat.py'
Jan 10 16:51:17 compute-0 sudo[36745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:20 compute-0 python3.9[36747]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:51:20 compute-0 sudo[36745]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:20 compute-0 sudo[36897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iveqyyuqkuqmpzqubkcbtiewmeccotbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063880.5266166-263-163474237758433/AnsiballZ_command.py'
Jan 10 16:51:20 compute-0 sudo[36897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:24 compute-0 sshd-session[36900]: Connection closed by authenticating user root 216.36.124.133 port 35758 [preauth]
Jan 10 16:51:24 compute-0 python3.9[36899]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:24 compute-0 sudo[36897]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:25 compute-0 sudo[37052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldhbmihtcgulqbyzlupdiqaecdjcuufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063884.8388808-271-246286749148639/AnsiballZ_file.py'
Jan 10 16:51:25 compute-0 sudo[37052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:25 compute-0 python3.9[37054]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:51:25 compute-0 sudo[37052]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:26 compute-0 sudo[37204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwqimohrzxgulwwmanjgsytjwjnfpzmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063885.6200066-282-55254929277492/AnsiballZ_getent.py'
Jan 10 16:51:26 compute-0 sudo[37204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:26 compute-0 python3.9[37206]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 10 16:51:26 compute-0 sudo[37204]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:26 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 16:51:26 compute-0 sudo[37358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhvwdcodplmzmggnorihuoqmgtectqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063886.4036534-290-19473190435560/AnsiballZ_group.py'
Jan 10 16:51:26 compute-0 sudo[37358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:27 compute-0 python3.9[37360]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 16:51:27 compute-0 groupadd[37361]: group added to /etc/group: name=qemu, GID=107
Jan 10 16:51:27 compute-0 groupadd[37361]: group added to /etc/gshadow: name=qemu
Jan 10 16:51:27 compute-0 groupadd[37361]: new group: name=qemu, GID=107
Jan 10 16:51:27 compute-0 sudo[37358]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:27 compute-0 sudo[37516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqkqwcnxwvxpgtaxtpmfkuqcfdwhrdos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063887.439285-298-38614470837867/AnsiballZ_user.py'
Jan 10 16:51:27 compute-0 sudo[37516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:28 compute-0 python3.9[37518]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 16:51:28 compute-0 useradd[37520]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 10 16:51:28 compute-0 sudo[37516]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:29 compute-0 sudo[37676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzpskxflkgcyzhqkqdjfmbcnpdktubu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063888.773635-306-3356881228958/AnsiballZ_getent.py'
Jan 10 16:51:29 compute-0 sudo[37676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:29 compute-0 python3.9[37678]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 10 16:51:29 compute-0 sudo[37676]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:29 compute-0 sudo[37829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbucymyicfjnisqmvibgcbzvpfwacvmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063889.4854069-314-239386700269155/AnsiballZ_group.py'
Jan 10 16:51:29 compute-0 sudo[37829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:29 compute-0 python3.9[37831]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 16:51:29 compute-0 groupadd[37832]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 10 16:51:29 compute-0 groupadd[37832]: group added to /etc/gshadow: name=hugetlbfs
Jan 10 16:51:30 compute-0 groupadd[37832]: new group: name=hugetlbfs, GID=42477
Jan 10 16:51:30 compute-0 sudo[37829]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:30 compute-0 sudo[37987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sunshgsmydcwjhqmicnakkbvxettqrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063890.2554805-323-7359005809882/AnsiballZ_file.py'
Jan 10 16:51:30 compute-0 sudo[37987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:30 compute-0 python3.9[37989]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 10 16:51:30 compute-0 sudo[37987]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:31 compute-0 sudo[38139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubpieszjylckcczixgbpscftjpzlxcgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063891.0973384-334-88036282046305/AnsiballZ_dnf.py'
Jan 10 16:51:31 compute-0 sudo[38139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:31 compute-0 python3.9[38141]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:51:34 compute-0 sudo[38139]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:34 compute-0 sudo[38292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keybcklujfegotbuogegqvjusnhnnjlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063894.6413617-342-231431216512127/AnsiballZ_file.py'
Jan 10 16:51:34 compute-0 sudo[38292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:35 compute-0 python3.9[38294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:51:35 compute-0 sudo[38292]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:35 compute-0 sudo[38444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lavytivveamyeonhvkvjxisnkopedqyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063895.336047-350-241734069473998/AnsiballZ_stat.py'
Jan 10 16:51:35 compute-0 sudo[38444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:35 compute-0 python3.9[38446]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:51:35 compute-0 sudo[38444]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:36 compute-0 sudo[38567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igyejsupyzyzagztucuitjhdlbhzvqby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063895.336047-350-241734069473998/AnsiballZ_copy.py'
Jan 10 16:51:36 compute-0 sudo[38567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:36 compute-0 python3.9[38569]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768063895.336047-350-241734069473998/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:51:36 compute-0 sudo[38567]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:37 compute-0 sudo[38719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgnsrgjipnghafvjiixamlfhgeyxyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063896.7219841-365-62766819369379/AnsiballZ_systemd.py'
Jan 10 16:51:37 compute-0 sudo[38719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:37 compute-0 python3.9[38721]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:51:37 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 10 16:51:37 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 10 16:51:37 compute-0 kernel: Bridge firewalling registered
Jan 10 16:51:37 compute-0 systemd-modules-load[38725]: Inserted module 'br_netfilter'
Jan 10 16:51:37 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 10 16:51:37 compute-0 sudo[38719]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:38 compute-0 sudo[38879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urotacjnvytogndwvmzzatvdsqsabhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063898.0593152-373-226241623176004/AnsiballZ_stat.py'
Jan 10 16:51:38 compute-0 sudo[38879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:38 compute-0 python3.9[38881]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:51:38 compute-0 sudo[38879]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:38 compute-0 sudo[39002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqnxuksjloixybxnwhyzuulozvhusic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063898.0593152-373-226241623176004/AnsiballZ_copy.py'
Jan 10 16:51:38 compute-0 sudo[39002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:39 compute-0 python3.9[39004]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768063898.0593152-373-226241623176004/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:51:39 compute-0 sudo[39002]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:39 compute-0 sudo[39154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspeuynqtawidxbetnzchopkieqljmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063899.4937723-391-195193973777149/AnsiballZ_dnf.py'
Jan 10 16:51:39 compute-0 sudo[39154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:40 compute-0 python3.9[39156]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:51:43 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 16:51:43 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 16:51:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:51:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:51:43 compute-0 systemd[1]: Reloading.
Jan 10 16:51:43 compute-0 systemd-rc-local-generator[39220]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:51:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:51:44 compute-0 sudo[39154]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:45 compute-0 python3.9[40336]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:51:46 compute-0 python3.9[41201]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 10 16:51:46 compute-0 python3.9[41914]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:51:47 compute-0 sudo[42792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grdxrkizjywnfchkcsdkeincyfxskswp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063907.015514-430-46112275004556/AnsiballZ_command.py'
Jan 10 16:51:47 compute-0 sudo[42792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:47 compute-0 python3.9[42810]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:47 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 16:51:48 compute-0 systemd[1]: Starting Authorization Manager...
Jan 10 16:51:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:51:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:51:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.629s CPU time.
Jan 10 16:51:48 compute-0 systemd[1]: run-r75795725a524488086aae489f1efef2c.service: Deactivated successfully.
Jan 10 16:51:48 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 16:51:48 compute-0 polkitd[43532]: Started polkitd version 0.117
Jan 10 16:51:48 compute-0 polkitd[43532]: Loading rules from directory /etc/polkit-1/rules.d
Jan 10 16:51:48 compute-0 polkitd[43532]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 10 16:51:48 compute-0 polkitd[43532]: Finished loading, compiling and executing 2 rules
Jan 10 16:51:48 compute-0 polkitd[43532]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 10 16:51:48 compute-0 systemd[1]: Started Authorization Manager.
Jan 10 16:51:48 compute-0 sudo[42792]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:48 compute-0 sudo[43701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxhvtexjrgxkuvvpjsonhzlzxndshom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063908.4076536-439-243501061365334/AnsiballZ_systemd.py'
Jan 10 16:51:48 compute-0 sudo[43701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:48 compute-0 python3.9[43703]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:51:49 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 10 16:51:49 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 10 16:51:49 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 10 16:51:49 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 16:51:49 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 16:51:49 compute-0 sudo[43701]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:49 compute-0 python3.9[43865]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 10 16:51:51 compute-0 sudo[44015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugiihgfsirntuyzvsgxqbonyysnncye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063911.5940876-496-241915465944291/AnsiballZ_systemd.py'
Jan 10 16:51:51 compute-0 sudo[44015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:52 compute-0 python3.9[44017]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:51:52 compute-0 systemd[1]: Reloading.
Jan 10 16:51:52 compute-0 systemd-rc-local-generator[44042]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:51:52 compute-0 sudo[44015]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:52 compute-0 sudo[44205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsaufpatmoqyvcjmrnkxfaheittdwqit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063912.6446567-496-203075192349510/AnsiballZ_systemd.py'
Jan 10 16:51:52 compute-0 sudo[44205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:53 compute-0 python3.9[44207]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:51:53 compute-0 systemd[1]: Reloading.
Jan 10 16:51:53 compute-0 systemd-rc-local-generator[44231]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:51:53 compute-0 sudo[44205]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:54 compute-0 sudo[44394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysgdtivnvjtvkcnxvxedonrzpaivhmrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063913.8009045-512-59390715652284/AnsiballZ_command.py'
Jan 10 16:51:54 compute-0 sudo[44394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:54 compute-0 python3.9[44396]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:54 compute-0 sudo[44394]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:54 compute-0 sudo[44547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmvycdkrkzsrmuobjtewvfaglfsvthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063914.5293162-520-112825732053462/AnsiballZ_command.py'
Jan 10 16:51:54 compute-0 sudo[44547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:54 compute-0 python3.9[44549]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:54 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 10 16:51:54 compute-0 sudo[44547]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:55 compute-0 sudo[44700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqpjkzsdpzsbactlzmwqlzjebcjwqmqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063915.1678076-528-270636987159158/AnsiballZ_command.py'
Jan 10 16:51:55 compute-0 sudo[44700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:55 compute-0 python3.9[44702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:57 compute-0 sudo[44700]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:57 compute-0 sudo[44862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpkgnaqktqctzgqpdsvqagadmwmgaros ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063917.3930025-536-50687855989353/AnsiballZ_command.py'
Jan 10 16:51:57 compute-0 sudo[44862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:57 compute-0 python3.9[44864]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:51:57 compute-0 sudo[44862]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:58 compute-0 sudo[45015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnnzcxjnerefgpxnzzoatjttttwvyvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063918.0533128-544-214969149997822/AnsiballZ_systemd.py'
Jan 10 16:51:58 compute-0 sudo[45015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:51:58 compute-0 python3.9[45017]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:51:58 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 10 16:51:58 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 10 16:51:58 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 10 16:51:58 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 10 16:51:58 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 10 16:51:58 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 10 16:51:58 compute-0 sudo[45015]: pam_unix(sudo:session): session closed for user root
Jan 10 16:51:59 compute-0 sshd-session[31387]: Connection closed by 192.168.122.30 port 38208
Jan 10 16:51:59 compute-0 sshd-session[31384]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:51:59 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 10 16:51:59 compute-0 systemd[1]: session-9.scope: Consumed 2min 31.344s CPU time.
Jan 10 16:51:59 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Jan 10 16:51:59 compute-0 systemd-logind[798]: Removed session 9.
Jan 10 16:52:05 compute-0 sshd-session[45047]: Accepted publickey for zuul from 192.168.122.30 port 52270 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:52:05 compute-0 systemd-logind[798]: New session 10 of user zuul.
Jan 10 16:52:05 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 10 16:52:05 compute-0 sshd-session[45047]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:52:06 compute-0 python3.9[45200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:52:07 compute-0 sudo[45354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amltxsfnqvknktsdkdoahahmgkdrqttd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063926.6664429-31-121868334866490/AnsiballZ_getent.py'
Jan 10 16:52:07 compute-0 sudo[45354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:07 compute-0 python3.9[45356]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 10 16:52:07 compute-0 sudo[45354]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:08 compute-0 sudo[45507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcuzdiviwjuzacztqxqkcfxjdiwbpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063927.559896-39-186918819376073/AnsiballZ_group.py'
Jan 10 16:52:08 compute-0 sudo[45507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:08 compute-0 python3.9[45509]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 16:52:08 compute-0 groupadd[45510]: group added to /etc/group: name=openvswitch, GID=42476
Jan 10 16:52:08 compute-0 groupadd[45510]: group added to /etc/gshadow: name=openvswitch
Jan 10 16:52:08 compute-0 groupadd[45510]: new group: name=openvswitch, GID=42476
Jan 10 16:52:08 compute-0 sudo[45507]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:08 compute-0 sudo[45665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftbeutdrrnndwwxibdeuvxefsjlyjoyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063928.4502628-47-45440046317378/AnsiballZ_user.py'
Jan 10 16:52:08 compute-0 sudo[45665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:09 compute-0 python3.9[45667]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 16:52:09 compute-0 useradd[45669]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 10 16:52:09 compute-0 useradd[45669]: add 'openvswitch' to group 'hugetlbfs'
Jan 10 16:52:09 compute-0 useradd[45669]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 10 16:52:09 compute-0 sudo[45665]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:09 compute-0 sudo[45825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqcrtmatoowhdrpgofnazzmqflhrykir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063929.5466106-57-164551125736676/AnsiballZ_setup.py'
Jan 10 16:52:09 compute-0 sudo[45825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:10 compute-0 python3.9[45827]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:52:10 compute-0 sudo[45825]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:10 compute-0 sudo[45910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evscpltcpuniuxpfmgwfuhssqnpkciqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063929.5466106-57-164551125736676/AnsiballZ_dnf.py'
Jan 10 16:52:10 compute-0 sudo[45910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:11 compute-0 python3.9[45912]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 16:52:13 compute-0 sudo[45910]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:13 compute-0 sudo[46073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfzgswmwfxqucpkfhhopohvhadsffgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063933.486632-71-229664558016628/AnsiballZ_dnf.py'
Jan 10 16:52:13 compute-0 sudo[46073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:13 compute-0 python3.9[46075]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:52:25 compute-0 kernel: SELinux:  Converting 2732 SID table entries...
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:52:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:52:26 compute-0 groupadd[46098]: group added to /etc/group: name=unbound, GID=994
Jan 10 16:52:26 compute-0 groupadd[46098]: group added to /etc/gshadow: name=unbound
Jan 10 16:52:26 compute-0 groupadd[46098]: new group: name=unbound, GID=994
Jan 10 16:52:26 compute-0 useradd[46105]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 10 16:52:26 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 10 16:52:26 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 10 16:52:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:52:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:52:28 compute-0 systemd[1]: Reloading.
Jan 10 16:52:28 compute-0 systemd-rc-local-generator[46597]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:52:28 compute-0 systemd-sysv-generator[46603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:52:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:52:28 compute-0 sudo[46073]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:52:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:52:29 compute-0 systemd[1]: run-r93b3b86b0dd04336994bcf814f486156.service: Deactivated successfully.
Jan 10 16:52:29 compute-0 sudo[47172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eresnfjwiymabezdczgewnudjgmbvkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063949.1402717-79-144689356312113/AnsiballZ_systemd.py'
Jan 10 16:52:29 compute-0 sudo[47172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:30 compute-0 python3.9[47174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 16:52:30 compute-0 systemd[1]: Reloading.
Jan 10 16:52:30 compute-0 systemd-rc-local-generator[47196]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:52:30 compute-0 systemd-sysv-generator[47203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:52:30 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 10 16:52:30 compute-0 chown[47216]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 10 16:52:30 compute-0 ovs-ctl[47221]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 10 16:52:30 compute-0 ovs-ctl[47221]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 10 16:52:30 compute-0 ovs-ctl[47221]: Starting ovsdb-server [  OK  ]
Jan 10 16:52:30 compute-0 ovs-vsctl[47270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 10 16:52:30 compute-0 ovs-vsctl[47290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"fbd04e21-7be2-4eb3-a385-03f0bb540a40\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 10 16:52:30 compute-0 ovs-ctl[47221]: Configuring Open vSwitch system IDs [  OK  ]
Jan 10 16:52:30 compute-0 ovs-ctl[47221]: Enabling remote OVSDB managers [  OK  ]
Jan 10 16:52:30 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 10 16:52:30 compute-0 ovs-vsctl[47296]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 10 16:52:30 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 10 16:52:30 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 10 16:52:30 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 10 16:52:30 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 10 16:52:30 compute-0 ovs-ctl[47341]: Inserting openvswitch module [  OK  ]
Jan 10 16:52:30 compute-0 ovs-ctl[47310]: Starting ovs-vswitchd [  OK  ]
Jan 10 16:52:30 compute-0 ovs-vsctl[47358]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 10 16:52:30 compute-0 ovs-ctl[47310]: Enabling remote OVSDB managers [  OK  ]
Jan 10 16:52:30 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 10 16:52:30 compute-0 systemd[1]: Starting Open vSwitch...
Jan 10 16:52:30 compute-0 systemd[1]: Finished Open vSwitch.
Jan 10 16:52:30 compute-0 sudo[47172]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:31 compute-0 python3.9[47510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:52:32 compute-0 sudo[47660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvwvjuinhrwcyjokgifpspkqorokzpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063952.0159533-97-55069062967079/AnsiballZ_sefcontext.py'
Jan 10 16:52:32 compute-0 sudo[47660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:32 compute-0 python3.9[47662]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 10 16:52:34 compute-0 kernel: SELinux:  Converting 2746 SID table entries...
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 16:52:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 16:52:34 compute-0 sudo[47660]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:35 compute-0 python3.9[47818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:52:35 compute-0 sudo[47974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhykaclrqpmgwbrxgbdepfgpckgunqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063955.5749886-115-107899449259127/AnsiballZ_dnf.py'
Jan 10 16:52:35 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 10 16:52:35 compute-0 sudo[47974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:36 compute-0 python3.9[47976]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:52:37 compute-0 sudo[47974]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:38 compute-0 sudo[48129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsltgxsuewhmdbysynbofgnecqvqjdam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063957.7888076-123-248323879323715/AnsiballZ_command.py'
Jan 10 16:52:38 compute-0 sudo[48129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:38 compute-0 sshd-session[48023]: Received disconnect from 193.46.255.99 port 43950:11:  [preauth]
Jan 10 16:52:38 compute-0 sshd-session[48023]: Disconnected from authenticating user root 193.46.255.99 port 43950 [preauth]
Jan 10 16:52:38 compute-0 python3.9[48131]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:52:39 compute-0 sudo[48129]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:39 compute-0 sudo[48416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-motvsihmiwfbkyanlnrnqsfenhirmqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063959.5225585-131-96251348191614/AnsiballZ_file.py'
Jan 10 16:52:39 compute-0 sudo[48416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:40 compute-0 python3.9[48418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 10 16:52:40 compute-0 sudo[48416]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:40 compute-0 python3.9[48568]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:52:41 compute-0 sudo[48720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxoicvpwflxkidcvyiivvjwhlfeksgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063961.1892953-147-100606180058867/AnsiballZ_dnf.py'
Jan 10 16:52:41 compute-0 sudo[48720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:41 compute-0 python3.9[48722]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:52:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:52:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:52:43 compute-0 systemd[1]: Reloading.
Jan 10 16:52:43 compute-0 systemd-rc-local-generator[48762]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:52:43 compute-0 systemd-sysv-generator[48765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:52:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:52:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:52:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:52:44 compute-0 systemd[1]: run-r9d21fc0d50ca465b86a5967bc2a62f3c.service: Deactivated successfully.
Jan 10 16:52:44 compute-0 sudo[48720]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:44 compute-0 sudo[49036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpebbspmcjnxiqizjlnvpdgpxpndlfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063964.5008519-155-171076107100556/AnsiballZ_systemd.py'
Jan 10 16:52:44 compute-0 sudo[49036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:45 compute-0 python3.9[49038]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:52:45 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 10 16:52:45 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 10 16:52:45 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 10 16:52:45 compute-0 systemd[1]: Stopping Network Manager...
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1525] caught SIGTERM, shutting down normally.
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1539] dhcp4 (eth0): canceled DHCP transaction
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1540] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1540] dhcp4 (eth0): state changed no lease
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1542] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 16:52:45 compute-0 NetworkManager[7178]: <info>  [1768063965.1610] exiting (success)
Jan 10 16:52:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:52:45 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 10 16:52:45 compute-0 systemd[1]: Stopped Network Manager.
Jan 10 16:52:45 compute-0 systemd[1]: NetworkManager.service: Consumed 13.559s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Jan 10 16:52:45 compute-0 systemd[1]: Starting Network Manager...
Jan 10 16:52:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.2206] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.2209] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.2271] manager[0x5577ecce3000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 16:52:45 compute-0 systemd[1]: Starting Hostname Service...
Jan 10 16:52:45 compute-0 systemd[1]: Started Hostname Service.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3067] hostname: hostname: using hostnamed
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3068] hostname: static hostname changed from (none) to "compute-0"
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3073] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3078] manager[0x5577ecce3000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3079] manager[0x5577ecce3000]: rfkill: WWAN hardware radio set enabled
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3098] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3108] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3109] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3110] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3111] manager: Networking is enabled by state file
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3114] settings: Loaded settings plugin: keyfile (internal)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3118] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3140] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3149] dhcp: init: Using DHCP client 'internal'
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3152] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3157] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3163] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3169] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3175] device (eth0): carrier: link connected
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3179] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3184] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3185] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3191] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3197] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3202] device (eth1): carrier: link connected
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3206] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3211] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b) (indicated)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3212] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3216] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3222] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3228] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 16:52:45 compute-0 systemd[1]: Started Network Manager.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3235] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3246] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3250] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3252] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3256] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3258] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3262] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3266] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3273] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3277] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3286] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3303] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3312] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3315] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3317] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3322] device (lo): Activation: successful, device activated.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3335] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 16:52:45 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3397] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3403] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3405] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3408] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3412] device (eth1): Activation: successful, device activated.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3425] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3427] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3430] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3432] device (eth0): Activation: successful, device activated.
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3435] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 16:52:45 compute-0 NetworkManager[49047]: <info>  [1768063965.3437] manager: startup complete
Jan 10 16:52:45 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 10 16:52:45 compute-0 sudo[49036]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:45 compute-0 sudo[49263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vydwwmdicmfgplttbubxoffvqhsfntci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063965.5305634-163-5200410991592/AnsiballZ_dnf.py'
Jan 10 16:52:45 compute-0 sudo[49263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:46 compute-0 python3.9[49265]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:52:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:52:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:52:50 compute-0 systemd[1]: Reloading.
Jan 10 16:52:51 compute-0 systemd-rc-local-generator[49316]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:52:51 compute-0 systemd-sysv-generator[49429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:52:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 16:52:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:52:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:52:51 compute-0 systemd[1]: run-ra68a682dcca1494baebaa7549057944c.service: Deactivated successfully.
Jan 10 16:52:52 compute-0 sudo[49263]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:52 compute-0 sudo[49726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cthzehpykoacqdgpeztxxbmlzvutshzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063972.6145906-175-130763546200567/AnsiballZ_stat.py'
Jan 10 16:52:52 compute-0 sudo[49726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:53 compute-0 python3.9[49728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:52:53 compute-0 sudo[49726]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:53 compute-0 sudo[49878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzvjxuvzblbpzeeqfhfoppbggtllahs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063973.3118665-184-188567840179562/AnsiballZ_ini_file.py'
Jan 10 16:52:53 compute-0 sudo[49878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:53 compute-0 python3.9[49880]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:53 compute-0 sudo[49878]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:54 compute-0 sudo[50032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihjxhsgyfptkjpzxfoedieythnrnleu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063974.2420228-194-89561290791715/AnsiballZ_ini_file.py'
Jan 10 16:52:54 compute-0 sudo[50032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:54 compute-0 python3.9[50034]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:54 compute-0 sudo[50032]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:55 compute-0 sudo[50184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydlxbahygzxndsdstsilsrnaiftbmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063974.9035323-194-255814999151075/AnsiballZ_ini_file.py'
Jan 10 16:52:55 compute-0 sudo[50184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:55 compute-0 python3.9[50186]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:55 compute-0 sudo[50184]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:52:55 compute-0 sudo[50336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfdfjcgxnispncilqztrjhtyyzefnrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063975.5860207-209-171750402216893/AnsiballZ_ini_file.py'
Jan 10 16:52:55 compute-0 sudo[50336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:56 compute-0 python3.9[50338]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:56 compute-0 sudo[50336]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:56 compute-0 sudo[50488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xusxzratgdfxyfcmsrskfvlnfckbzjtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063976.2402298-209-4939623079580/AnsiballZ_ini_file.py'
Jan 10 16:52:56 compute-0 sudo[50488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:56 compute-0 python3.9[50490]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:56 compute-0 sudo[50488]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:57 compute-0 sudo[50640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzmfzkwerhimiabadnebonywvmyidto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063976.8513954-224-50568702282763/AnsiballZ_stat.py'
Jan 10 16:52:57 compute-0 sudo[50640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:57 compute-0 python3.9[50642]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:52:57 compute-0 sudo[50640]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:57 compute-0 sudo[50763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aukrashtfigaccxhjvkxlyymhgfxxvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063976.8513954-224-50568702282763/AnsiballZ_copy.py'
Jan 10 16:52:57 compute-0 sudo[50763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:58 compute-0 python3.9[50765]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063976.8513954-224-50568702282763/.source _original_basename=.5uer7pom follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:58 compute-0 sudo[50763]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:58 compute-0 sudo[50915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguwbsfjgpinpkwfwjdnvlynfdohwfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063978.2606153-239-160213150032681/AnsiballZ_file.py'
Jan 10 16:52:58 compute-0 sudo[50915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:58 compute-0 python3.9[50917]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:52:58 compute-0 sudo[50915]: pam_unix(sudo:session): session closed for user root
Jan 10 16:52:59 compute-0 sudo[51067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frmdjcrbhwrjotvyfcxoznmqebiyhksa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063979.0107648-247-193375155792531/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 10 16:52:59 compute-0 sudo[51067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:52:59 compute-0 python3.9[51069]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 10 16:52:59 compute-0 sudo[51067]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:00 compute-0 sudo[51219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtviwjesjppanteqvczqhbkuwpfhwtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063979.8470805-256-262508211776444/AnsiballZ_file.py'
Jan 10 16:53:00 compute-0 sudo[51219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:00 compute-0 python3.9[51221]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:53:00 compute-0 sudo[51219]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:00 compute-0 sudo[51371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awutrsztxfybbchlkzkkfaaefhbzuezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063980.6145387-266-15056188657494/AnsiballZ_stat.py'
Jan 10 16:53:00 compute-0 sudo[51371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:01 compute-0 sudo[51371]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:01 compute-0 sudo[51494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxiqwpozgzqxlsjwdtmpghxxptkyymmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063980.6145387-266-15056188657494/AnsiballZ_copy.py'
Jan 10 16:53:01 compute-0 sudo[51494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:01 compute-0 sudo[51494]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:02 compute-0 sudo[51646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlcuvmrzetouuotitelyqsbanumsghh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063981.8619766-281-141887114001004/AnsiballZ_slurp.py'
Jan 10 16:53:02 compute-0 sudo[51646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:02 compute-0 python3.9[51648]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 10 16:53:02 compute-0 sudo[51646]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:03 compute-0 sudo[51821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzykchzlaxadphzxfzrieoadhaxzqres ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063982.8245974-290-35122400301436/async_wrapper.py j968501777284 300 /home/zuul/.ansible/tmp/ansible-tmp-1768063982.8245974-290-35122400301436/AnsiballZ_edpm_os_net_config.py _'
Jan 10 16:53:03 compute-0 sudo[51821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:03 compute-0 ansible-async_wrapper.py[51823]: Invoked with j968501777284 300 /home/zuul/.ansible/tmp/ansible-tmp-1768063982.8245974-290-35122400301436/AnsiballZ_edpm_os_net_config.py _
Jan 10 16:53:03 compute-0 ansible-async_wrapper.py[51826]: Starting module and watcher
Jan 10 16:53:03 compute-0 ansible-async_wrapper.py[51826]: Start watching 51827 (300)
Jan 10 16:53:03 compute-0 ansible-async_wrapper.py[51827]: Start module (51827)
Jan 10 16:53:03 compute-0 ansible-async_wrapper.py[51823]: Return async_wrapper task started.
Jan 10 16:53:03 compute-0 sudo[51821]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:03 compute-0 python3.9[51828]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 10 16:53:04 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 10 16:53:04 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 10 16:53:04 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 10 16:53:04 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 10 16:53:04 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 10 16:53:06 compute-0 sshd-session[51830]: Connection closed by authenticating user root 216.36.124.133 port 36708 [preauth]
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.0664] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.0685] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1452] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1454] audit: op="connection-add" uuid="23749045-7eb7-469c-8025-95dce7f6a3d3" name="br-ex-br" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1475] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1477] audit: op="connection-add" uuid="572c521b-ef1e-44c2-9e45-2cca971e91bc" name="br-ex-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1491] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1493] audit: op="connection-add" uuid="73d0cb0f-8899-4e89-8c0c-62c877854379" name="eth1-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1506] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1508] audit: op="connection-add" uuid="3557aafa-0c7c-47f7-8a26-0de2cb1d26f1" name="vlan20-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1521] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1523] audit: op="connection-add" uuid="7e53a335-ecc6-41f9-a8f8-655a8fc57559" name="vlan21-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1535] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1536] audit: op="connection-add" uuid="4108fd7f-3d23-4e29-bd33-50b051eb3de2" name="vlan22-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1549] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1551] audit: op="connection-add" uuid="726bf054-d501-4702-b1b5-e5edf3bf45dd" name="vlan23-port" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1574] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1593] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1595] audit: op="connection-add" uuid="a87f6957-10a3-478e-ab01-106b44cf4872" name="br-ex-if" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1657] audit: op="connection-update" uuid="91161bbf-f289-5cf0-9a28-a3cd6f92331b" name="ci-private-network" args="ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.dns,ipv4.never-default,ipv4.routing-rules,connection.master,connection.timestamp,connection.slave-type,connection.port-type,connection.controller,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ovs-external-ids.data" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1677] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1679] audit: op="connection-add" uuid="4383ba21-56a7-4018-aef3-ad454d1194e3" name="vlan20-if" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1698] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1700] audit: op="connection-add" uuid="f9d660d7-3e79-4f86-92b3-5afd2ef4ef22" name="vlan21-if" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1719] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1722] audit: op="connection-add" uuid="489eb104-a1ff-4310-adab-54b2c3517112" name="vlan22-if" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1742] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1744] audit: op="connection-add" uuid="82fb7275-a56a-46b9-975d-13d2400166d7" name="vlan23-if" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1760] audit: op="connection-delete" uuid="3d2c32e1-e902-3a7a-bfe1-2a4ee0361874" name="Wired connection 1" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1775] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1778] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1785] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1790] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (23749045-7eb7-469c-8025-95dce7f6a3d3)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1790] audit: op="connection-activate" uuid="23749045-7eb7-469c-8025-95dce7f6a3d3" name="br-ex-br" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1792] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1793] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1800] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1804] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (572c521b-ef1e-44c2-9e45-2cca971e91bc)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1806] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1807] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1812] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1816] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (73d0cb0f-8899-4e89-8c0c-62c877854379)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1818] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1819] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1824] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1829] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (3557aafa-0c7c-47f7-8a26-0de2cb1d26f1)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1831] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1832] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1837] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1842] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7e53a335-ecc6-41f9-a8f8-655a8fc57559)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1844] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1845] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1850] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1855] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4108fd7f-3d23-4e29-bd33-50b051eb3de2)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1857] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1858] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1864] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1869] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (726bf054-d501-4702-b1b5-e5edf3bf45dd)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1869] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1872] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1874] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1881] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1882] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1886] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1891] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a87f6957-10a3-478e-ab01-106b44cf4872)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1892] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1895] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1897] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1899] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1900] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1912] device (eth1): disconnecting for new activation request.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1913] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1917] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1919] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1921] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1924] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1925] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1928] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1933] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4383ba21-56a7-4018-aef3-ad454d1194e3)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1934] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1937] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1939] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1940] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1943] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1944] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1948] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1952] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f9d660d7-3e79-4f86-92b3-5afd2ef4ef22)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1954] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1957] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1959] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1961] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1964] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1965] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1968] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1974] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (489eb104-a1ff-4310-adab-54b2c3517112)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1974] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1978] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1980] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1982] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1985] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <warn>  [1768063986.1986] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1990] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1995] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (82fb7275-a56a-46b9-975d-13d2400166d7)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.1996] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2002] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2004] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2005] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2008] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2021] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2023] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2026] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2028] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2036] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2040] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2044] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2048] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2050] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2056] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2060] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2064] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2066] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2071] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2074] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2078] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2080] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2084] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2088] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2092] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2094] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2099] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2104] dhcp4 (eth0): canceled DHCP transaction
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2104] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2105] dhcp4 (eth0): state changed no lease
Jan 10 16:53:06 compute-0 kernel: Timeout policy base is empty
Jan 10 16:53:06 compute-0 systemd-udevd[51837]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2106] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2120] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2127] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51829 uid=0 result="fail" reason="Device is not activated"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2131] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2137] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 10 16:53:06 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2174] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2178] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2220] device (eth1): disconnecting for new activation request.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2221] audit: op="connection-activate" uuid="91161bbf-f289-5cf0-9a28-a3cd6f92331b" name="ci-private-network" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2243] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2256] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 16:53:06 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2399] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2577] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2602] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2609] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2619] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2620] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2622] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2623] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2624] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2626] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2627] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2632] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2638] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2642] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2645] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2651] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2655] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2659] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2662] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2665] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2670] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2673] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2678] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2681] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2686] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2690] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2696] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2702] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2753] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2756] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.2764] device (eth1): Activation: successful, device activated.
Jan 10 16:53:06 compute-0 kernel: br-ex: entered promiscuous mode
Jan 10 16:53:06 compute-0 kernel: vlan22: entered promiscuous mode
Jan 10 16:53:06 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 10 16:53:06 compute-0 systemd-udevd[51836]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:53:06 compute-0 kernel: vlan23: entered promiscuous mode
Jan 10 16:53:06 compute-0 systemd-udevd[51835]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3093] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3104] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3131] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3133] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3139] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 kernel: vlan20: entered promiscuous mode
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3181] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3188] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 10 16:53:06 compute-0 systemd-udevd[51950]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3207] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3214] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3244] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3246] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3247] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3251] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3255] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 kernel: vlan21: entered promiscuous mode
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3258] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3309] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3317] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3336] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3337] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3341] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3417] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3426] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3453] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3454] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 16:53:06 compute-0 NetworkManager[49047]: <info>  [1768063986.3458] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 16:53:07 compute-0 NetworkManager[49047]: <info>  [1768063987.4379] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 16:53:07 compute-0 sudo[52189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexckwdendtvuidmwoglwqqibnxylmur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063986.9132538-290-2824519446638/AnsiballZ_async_status.py'
Jan 10 16:53:07 compute-0 sudo[52189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:07 compute-0 NetworkManager[49047]: <info>  [1768063987.6821] checkpoint[0x5577eccb8950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 10 16:53:07 compute-0 NetworkManager[49047]: <info>  [1768063987.6823] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 16:53:07 compute-0 python3.9[52191]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=status _async_dir=/root/.ansible_async
Jan 10 16:53:07 compute-0 sudo[52189]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.0426] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.0442] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.3075] audit: op="networking-control" arg="global-dns-configuration" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.3120] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.3162] audit: op="networking-control" arg="global-dns-configuration" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.3186] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.4777] checkpoint[0x5577eccb8a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 10 16:53:08 compute-0 NetworkManager[49047]: <info>  [1768063988.4783] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 16:53:08 compute-0 ansible-async_wrapper.py[51827]: Module complete (51827)
Jan 10 16:53:08 compute-0 ansible-async_wrapper.py[51826]: Done in kid B.
Jan 10 16:53:10 compute-0 sudo[52295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spyrxihvdizjdslhzynqklkkihazahdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063986.9132538-290-2824519446638/AnsiballZ_async_status.py'
Jan 10 16:53:11 compute-0 sudo[52295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:11 compute-0 python3.9[52298]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=status _async_dir=/root/.ansible_async
Jan 10 16:53:11 compute-0 sudo[52295]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:11 compute-0 sudo[52395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsbixhwbqutbacgyesdgjsyhpexzlyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063986.9132538-290-2824519446638/AnsiballZ_async_status.py'
Jan 10 16:53:11 compute-0 sudo[52395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:11 compute-0 python3.9[52397]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 16:53:11 compute-0 sudo[52395]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:12 compute-0 sudo[52547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kveramqfumotrlmpwaunhdlwnliksacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063991.9172146-317-4528830345523/AnsiballZ_stat.py'
Jan 10 16:53:12 compute-0 sudo[52547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:12 compute-0 python3.9[52549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:53:12 compute-0 sudo[52547]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:12 compute-0 sudo[52670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkaxhcaoocwkaoaplizmyqwegtoqzijp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063991.9172146-317-4528830345523/AnsiballZ_copy.py'
Jan 10 16:53:12 compute-0 sudo[52670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:12 compute-0 python3.9[52672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063991.9172146-317-4528830345523/.source.returncode _original_basename=.evysxujx follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:53:12 compute-0 sudo[52670]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:13 compute-0 sudo[52822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkqagnxfmdabvkokypebzzctkipfkaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063993.152727-333-278884552190980/AnsiballZ_stat.py'
Jan 10 16:53:13 compute-0 sudo[52822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:13 compute-0 python3.9[52824]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:53:13 compute-0 sudo[52822]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:13 compute-0 sudo[52945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ambzehxexuzzstzupnvgrnrvtnmvminx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063993.152727-333-278884552190980/AnsiballZ_copy.py'
Jan 10 16:53:13 compute-0 sudo[52945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:14 compute-0 python3.9[52947]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063993.152727-333-278884552190980/.source.cfg _original_basename=.jhh3vdjf follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:53:14 compute-0 sudo[52945]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:14 compute-0 sudo[53098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nytqrrwzoqsezpoxaoboonndpyftbcdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768063994.3502712-348-198822729648740/AnsiballZ_systemd.py'
Jan 10 16:53:14 compute-0 sudo[53098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:14 compute-0 python3.9[53100]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:53:15 compute-0 systemd[1]: Reloading Network Manager...
Jan 10 16:53:15 compute-0 NetworkManager[49047]: <info>  [1768063995.0304] audit: op="reload" arg="0" pid=53104 uid=0 result="success"
Jan 10 16:53:15 compute-0 NetworkManager[49047]: <info>  [1768063995.0314] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 10 16:53:15 compute-0 systemd[1]: Reloaded Network Manager.
Jan 10 16:53:15 compute-0 sudo[53098]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 16:53:15 compute-0 sshd-session[45050]: Connection closed by 192.168.122.30 port 52270
Jan 10 16:53:15 compute-0 sshd-session[45047]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:53:15 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 10 16:53:15 compute-0 systemd[1]: session-10.scope: Consumed 53.013s CPU time.
Jan 10 16:53:15 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Jan 10 16:53:15 compute-0 systemd-logind[798]: Removed session 10.
Jan 10 16:53:21 compute-0 sshd-session[53137]: Accepted publickey for zuul from 192.168.122.30 port 51058 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:53:21 compute-0 systemd-logind[798]: New session 11 of user zuul.
Jan 10 16:53:21 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 10 16:53:21 compute-0 sshd-session[53137]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:53:22 compute-0 python3.9[53290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:53:23 compute-0 python3.9[53444]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:53:24 compute-0 python3.9[53638]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:53:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 16:53:25 compute-0 sshd-session[53140]: Connection closed by 192.168.122.30 port 51058
Jan 10 16:53:25 compute-0 sshd-session[53137]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:53:25 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 10 16:53:25 compute-0 systemd[1]: session-11.scope: Consumed 2.480s CPU time.
Jan 10 16:53:25 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Jan 10 16:53:25 compute-0 systemd-logind[798]: Removed session 11.
Jan 10 16:53:30 compute-0 sshd-session[53667]: Accepted publickey for zuul from 192.168.122.30 port 45614 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:53:30 compute-0 systemd-logind[798]: New session 12 of user zuul.
Jan 10 16:53:30 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 10 16:53:30 compute-0 sshd-session[53667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:53:31 compute-0 python3.9[53820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:53:32 compute-0 python3.9[53974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:53:33 compute-0 sudo[54129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehwcgecgqjtyqtmywdhltnrjzuuyqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064013.342871-35-180411319932015/AnsiballZ_setup.py'
Jan 10 16:53:33 compute-0 sudo[54129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:33 compute-0 python3.9[54131]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:53:34 compute-0 sudo[54129]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:34 compute-0 sudo[54213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfqccoapexlazrudakzljqetcnncfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064013.342871-35-180411319932015/AnsiballZ_dnf.py'
Jan 10 16:53:34 compute-0 sudo[54213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:34 compute-0 python3.9[54215]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:53:36 compute-0 sudo[54213]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:37 compute-0 sudo[54367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wctwshwywvvxzrjjkyxmaqhrpfohubqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064016.8187907-47-41394022410793/AnsiballZ_setup.py'
Jan 10 16:53:37 compute-0 sudo[54367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:37 compute-0 python3.9[54369]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:53:37 compute-0 sudo[54367]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:38 compute-0 sudo[54562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcugexknrarzpxtyiuwwpabnjzsxivuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064017.9294264-58-181992374147988/AnsiballZ_file.py'
Jan 10 16:53:38 compute-0 sudo[54562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:38 compute-0 python3.9[54564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:53:38 compute-0 sudo[54562]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:39 compute-0 sudo[54714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssuoauhjdyfgzarovkmdmepjxrxebjdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064018.781312-66-65101730353696/AnsiballZ_command.py'
Jan 10 16:53:39 compute-0 sudo[54714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:39 compute-0 python3.9[54716]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2616064998-merged.mount: Deactivated successfully.
Jan 10 16:53:39 compute-0 podman[54717]: 2026-01-10 16:53:39.522741127 +0000 UTC m=+0.080718320 system refresh
Jan 10 16:53:39 compute-0 sudo[54714]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:40 compute-0 sudo[54877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggcejlndndnqlrgzgkpwcajbpanqzfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064019.7254739-74-134145731168203/AnsiballZ_stat.py'
Jan 10 16:53:40 compute-0 sudo[54877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:40 compute-0 python3.9[54879]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:53:40 compute-0 sudo[54877]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:53:40 compute-0 sudo[55000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycnueggefmymalzjfbjktwigsnnnwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064019.7254739-74-134145731168203/AnsiballZ_copy.py'
Jan 10 16:53:40 compute-0 sudo[55000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:41 compute-0 python3.9[55002]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064019.7254739-74-134145731168203/.source.json follow=False _original_basename=podman_network_config.j2 checksum=eb065b71cd2cac8ce28582061d6993c967907242 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:53:41 compute-0 sudo[55000]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:41 compute-0 sudo[55152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbsnfgmowssljvnpkroooxauuproeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064021.225296-89-58555669213042/AnsiballZ_stat.py'
Jan 10 16:53:41 compute-0 sudo[55152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:41 compute-0 python3.9[55154]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:53:41 compute-0 sudo[55152]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:42 compute-0 sudo[55275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftgffgduyuwqavleoijnwyzlwafselli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064021.225296-89-58555669213042/AnsiballZ_copy.py'
Jan 10 16:53:42 compute-0 sudo[55275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:42 compute-0 python3.9[55277]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064021.225296-89-58555669213042/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e054e42fc917865162376c34713b3d5516074d23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:53:42 compute-0 sudo[55275]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:42 compute-0 sudo[55427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthsujqwkzuuyfvdmfsmtxcjemdhjadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064022.490602-105-51898182373581/AnsiballZ_ini_file.py'
Jan 10 16:53:42 compute-0 sudo[55427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:43 compute-0 python3.9[55429]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:53:43 compute-0 sudo[55427]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:43 compute-0 sudo[55579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tweeprnifzooxdjugiwlvzzmudzyrctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064023.2992344-105-120429256602674/AnsiballZ_ini_file.py'
Jan 10 16:53:43 compute-0 sudo[55579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:43 compute-0 python3.9[55581]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:53:43 compute-0 sudo[55579]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:44 compute-0 sudo[55731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmirdwaugbojoonvcscwnfeupeilkrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064023.9296064-105-67762505294136/AnsiballZ_ini_file.py'
Jan 10 16:53:44 compute-0 sudo[55731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:44 compute-0 python3.9[55733]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:53:44 compute-0 sudo[55731]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:44 compute-0 sudo[55883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkczhcubybzwiyfgdgbblicfbzoiona ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064024.5309505-105-31282247449170/AnsiballZ_ini_file.py'
Jan 10 16:53:44 compute-0 sudo[55883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:45 compute-0 python3.9[55885]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:53:45 compute-0 sudo[55883]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:45 compute-0 sudo[56035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhnzbjwlqqlpxneikxfuqsgcjablrxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064025.3004503-136-213110500755666/AnsiballZ_dnf.py'
Jan 10 16:53:45 compute-0 sudo[56035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:45 compute-0 python3.9[56037]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:53:47 compute-0 sudo[56035]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:47 compute-0 sudo[56188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdzljndbsaysqqbypcrjwukqhmxdphz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064027.6381323-147-112597008764507/AnsiballZ_setup.py'
Jan 10 16:53:47 compute-0 sudo[56188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:48 compute-0 python3.9[56190]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:53:48 compute-0 sudo[56188]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:48 compute-0 sudo[56342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiegtvkdomulstfozvyosgormybruiri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064028.4888399-155-6136240619991/AnsiballZ_stat.py'
Jan 10 16:53:48 compute-0 sudo[56342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:48 compute-0 python3.9[56344]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:53:48 compute-0 sudo[56342]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:49 compute-0 sudo[56494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekislylrbyxwjabeypvyedgygizimwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064029.154524-164-15860503566292/AnsiballZ_stat.py'
Jan 10 16:53:49 compute-0 sudo[56494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:49 compute-0 python3.9[56496]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:53:49 compute-0 sudo[56494]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:50 compute-0 sudo[56646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frmrslimvnwasxqvpowrwbbwapiczlhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064029.9474332-174-135984067712214/AnsiballZ_command.py'
Jan 10 16:53:50 compute-0 sudo[56646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:50 compute-0 python3.9[56648]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:53:50 compute-0 sudo[56646]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:51 compute-0 sudo[56799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfmptxipswqwbwquvhwsuzgpravlvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064030.7235222-184-263790949021410/AnsiballZ_service_facts.py'
Jan 10 16:53:51 compute-0 sudo[56799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:51 compute-0 python3.9[56801]: ansible-service_facts Invoked
Jan 10 16:53:51 compute-0 network[56818]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 16:53:51 compute-0 network[56819]: 'network-scripts' will be removed from distribution in near future.
Jan 10 16:53:51 compute-0 network[56820]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 16:53:56 compute-0 sudo[56799]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:57 compute-0 sudo[57103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gypuvvzphcprtcazssmitynsnkoktxoi ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768064036.7832732-199-118130381827063/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768064036.7832732-199-118130381827063/args'
Jan 10 16:53:57 compute-0 sudo[57103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:57 compute-0 sudo[57103]: pam_unix(sudo:session): session closed for user root
Jan 10 16:53:57 compute-0 sudo[57270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mltwngwslvvdyfxarmkjtcqmdehwljyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064037.5538971-210-182190435818368/AnsiballZ_dnf.py'
Jan 10 16:53:57 compute-0 sudo[57270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:53:58 compute-0 python3.9[57272]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 16:53:59 compute-0 sudo[57270]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:00 compute-0 sudo[57423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coaopgvuzurzeqlqcrvoaiqvftdcppdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064040.2531354-223-235560240223964/AnsiballZ_package_facts.py'
Jan 10 16:54:00 compute-0 sudo[57423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:01 compute-0 python3.9[57425]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 10 16:54:01 compute-0 sudo[57423]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:02 compute-0 sudo[57575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwbjyenlgmhvsipgivyvgxpuiakahoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064041.8977213-233-100695284981687/AnsiballZ_stat.py'
Jan 10 16:54:02 compute-0 sudo[57575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:02 compute-0 python3.9[57577]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:02 compute-0 sudo[57575]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:02 compute-0 sudo[57700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrinaqddtsvpnezhbhdcyrsbtaksrezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064041.8977213-233-100695284981687/AnsiballZ_copy.py'
Jan 10 16:54:02 compute-0 sudo[57700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:02 compute-0 python3.9[57702]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064041.8977213-233-100695284981687/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:03 compute-0 sudo[57700]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:03 compute-0 sudo[57854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govgogbeqzzhowtzcpjnilwvxuwesaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064043.1907427-248-195756869997730/AnsiballZ_stat.py'
Jan 10 16:54:03 compute-0 sudo[57854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:03 compute-0 python3.9[57856]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:03 compute-0 sudo[57854]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:04 compute-0 sudo[57979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalzvevftjeqxtjiovdrsjoaqaywpfex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064043.1907427-248-195756869997730/AnsiballZ_copy.py'
Jan 10 16:54:04 compute-0 sudo[57979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:04 compute-0 python3.9[57981]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064043.1907427-248-195756869997730/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:04 compute-0 sudo[57979]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:05 compute-0 sudo[58133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gufyjrpetbddphczuvucuumptkgznvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064044.778123-269-195759575273460/AnsiballZ_lineinfile.py'
Jan 10 16:54:05 compute-0 sudo[58133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:05 compute-0 python3.9[58135]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:05 compute-0 sudo[58133]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:06 compute-0 sudo[58287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imluwullxzcewdwgypscmrsmpbhnbitv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064045.9925613-284-47403254130332/AnsiballZ_setup.py'
Jan 10 16:54:06 compute-0 sudo[58287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:06 compute-0 python3.9[58289]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:54:06 compute-0 sudo[58287]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:07 compute-0 sudo[58371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzxliqngzzaghloifvrepgfpzqyzrhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064045.9925613-284-47403254130332/AnsiballZ_systemd.py'
Jan 10 16:54:07 compute-0 sudo[58371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:07 compute-0 python3.9[58373]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:07 compute-0 sudo[58371]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:08 compute-0 sudo[58525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asymkpyedzlblwgjatbhtsplylqfacky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064048.2303169-300-54852536121910/AnsiballZ_setup.py'
Jan 10 16:54:08 compute-0 sudo[58525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:08 compute-0 python3.9[58527]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:54:09 compute-0 sudo[58525]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:09 compute-0 sudo[58609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xggvvbdpzdwvtfnnjsgnpocsmrvtypfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064048.2303169-300-54852536121910/AnsiballZ_systemd.py'
Jan 10 16:54:09 compute-0 sudo[58609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:09 compute-0 python3.9[58611]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:54:09 compute-0 chronyd[785]: chronyd exiting
Jan 10 16:54:09 compute-0 systemd[1]: Stopping NTP client/server...
Jan 10 16:54:09 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 10 16:54:09 compute-0 systemd[1]: Stopped NTP client/server.
Jan 10 16:54:09 compute-0 systemd[1]: Starting NTP client/server...
Jan 10 16:54:09 compute-0 chronyd[58619]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 10 16:54:09 compute-0 chronyd[58619]: Frequency -28.332 +/- 0.098 ppm read from /var/lib/chrony/drift
Jan 10 16:54:09 compute-0 chronyd[58619]: Loaded seccomp filter (level 2)
Jan 10 16:54:09 compute-0 systemd[1]: Started NTP client/server.
Jan 10 16:54:09 compute-0 sudo[58609]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:10 compute-0 sshd-session[53670]: Connection closed by 192.168.122.30 port 45614
Jan 10 16:54:10 compute-0 sshd-session[53667]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:54:10 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 10 16:54:10 compute-0 systemd[1]: session-12.scope: Consumed 27.578s CPU time.
Jan 10 16:54:10 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Jan 10 16:54:10 compute-0 systemd-logind[798]: Removed session 12.
Jan 10 16:54:15 compute-0 sshd-session[58645]: Accepted publickey for zuul from 192.168.122.30 port 50956 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:54:15 compute-0 systemd-logind[798]: New session 13 of user zuul.
Jan 10 16:54:15 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 10 16:54:15 compute-0 sshd-session[58645]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:54:16 compute-0 sudo[58798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegrcjfidubhzlnjcecfyhvdqihfzlie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064056.033305-17-173916387453449/AnsiballZ_file.py'
Jan 10 16:54:16 compute-0 sudo[58798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:16 compute-0 python3.9[58800]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:16 compute-0 sudo[58798]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:17 compute-0 sudo[58950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzfwhtonbnydilbisjtgdobuegtredb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064057.0053873-29-92772517778425/AnsiballZ_stat.py'
Jan 10 16:54:17 compute-0 sudo[58950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:17 compute-0 python3.9[58952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:17 compute-0 sudo[58950]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:18 compute-0 sudo[59073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmftxglrlerigjsynigosjfadkpjcgcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064057.0053873-29-92772517778425/AnsiballZ_copy.py'
Jan 10 16:54:18 compute-0 sudo[59073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:18 compute-0 python3.9[59075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064057.0053873-29-92772517778425/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:18 compute-0 sudo[59073]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:18 compute-0 sshd-session[58648]: Connection closed by 192.168.122.30 port 50956
Jan 10 16:54:18 compute-0 sshd-session[58645]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:54:18 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 10 16:54:18 compute-0 systemd[1]: session-13.scope: Consumed 2.013s CPU time.
Jan 10 16:54:18 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Jan 10 16:54:18 compute-0 systemd-logind[798]: Removed session 13.
Jan 10 16:54:24 compute-0 sshd-session[59100]: Accepted publickey for zuul from 192.168.122.30 port 33760 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:54:24 compute-0 systemd-logind[798]: New session 14 of user zuul.
Jan 10 16:54:24 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 10 16:54:24 compute-0 sshd-session[59100]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:54:25 compute-0 python3.9[59253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:54:26 compute-0 sudo[59407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhfyrxwxwdcjipttbhwwdpdddawisqmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064066.512266-28-164053180653051/AnsiballZ_file.py'
Jan 10 16:54:26 compute-0 sudo[59407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:27 compute-0 python3.9[59409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:27 compute-0 sudo[59407]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:27 compute-0 sudo[59582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkqoiameooawrbfukywcnilejhxwvgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064067.3679338-36-117475129275537/AnsiballZ_stat.py'
Jan 10 16:54:27 compute-0 sudo[59582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:28 compute-0 python3.9[59584]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:28 compute-0 sudo[59582]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:28 compute-0 sudo[59705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbrbjynadmerbokniqhwzmaxcyfqwpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064067.3679338-36-117475129275537/AnsiballZ_copy.py'
Jan 10 16:54:28 compute-0 sudo[59705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:28 compute-0 python3.9[59707]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768064067.3679338-36-117475129275537/.source.json _original_basename=.yxzqarkb follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:28 compute-0 sudo[59705]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:29 compute-0 sudo[59857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfnomoeisqolbdpvpdlidtrmbfafjhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064069.2496724-59-128005440004113/AnsiballZ_stat.py'
Jan 10 16:54:29 compute-0 sudo[59857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:29 compute-0 python3.9[59859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:29 compute-0 sudo[59857]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:30 compute-0 sudo[59980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulhmdmoqpbpqxccmjnsseskdqzmzbuvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064069.2496724-59-128005440004113/AnsiballZ_copy.py'
Jan 10 16:54:30 compute-0 sudo[59980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:30 compute-0 python3.9[59982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064069.2496724-59-128005440004113/.source _original_basename=.i_7mca7n follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:30 compute-0 sudo[59980]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:30 compute-0 sudo[60132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shhbwssvgxhoesrvjmgrufsrsruoixem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064070.4224446-75-130625116053364/AnsiballZ_file.py'
Jan 10 16:54:30 compute-0 sudo[60132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:30 compute-0 python3.9[60134]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:54:30 compute-0 sudo[60132]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:31 compute-0 sudo[60284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqubykreydaiildqxtzgikhqdfopvdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064071.0787003-83-192343670111554/AnsiballZ_stat.py'
Jan 10 16:54:31 compute-0 sudo[60284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:31 compute-0 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:31 compute-0 sudo[60284]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:31 compute-0 sudo[60407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiepejbpuceiyishemubgttiafjesxvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064071.0787003-83-192343670111554/AnsiballZ_copy.py'
Jan 10 16:54:31 compute-0 sudo[60407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:32 compute-0 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064071.0787003-83-192343670111554/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:54:32 compute-0 sudo[60407]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:32 compute-0 sudo[60559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkgjdkzmsaybszoitytwlmsjpvresuei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064072.3497322-83-266489083251085/AnsiballZ_stat.py'
Jan 10 16:54:32 compute-0 sudo[60559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:32 compute-0 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:32 compute-0 sudo[60559]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:33 compute-0 sudo[60682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilahbkfquliditbbopsxrjyxoadityzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064072.3497322-83-266489083251085/AnsiballZ_copy.py'
Jan 10 16:54:33 compute-0 sudo[60682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:33 compute-0 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064072.3497322-83-266489083251085/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 16:54:33 compute-0 sudo[60682]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:33 compute-0 sudo[60834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdrrydfyxrtwfhhrmryfnshqacubwovs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064073.592827-112-100715227855789/AnsiballZ_file.py'
Jan 10 16:54:33 compute-0 sudo[60834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:34 compute-0 python3.9[60836]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:34 compute-0 sudo[60834]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:34 compute-0 sudo[60986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqrxdxdawsaybmamnwszwlncyqqnfwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064074.278659-120-135229664252619/AnsiballZ_stat.py'
Jan 10 16:54:34 compute-0 sudo[60986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:34 compute-0 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:34 compute-0 sudo[60986]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:35 compute-0 sudo[61109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcrqecxzbrzralnesvzkhkeunlqbwnat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064074.278659-120-135229664252619/AnsiballZ_copy.py'
Jan 10 16:54:35 compute-0 sudo[61109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:35 compute-0 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064074.278659-120-135229664252619/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:35 compute-0 sudo[61109]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:35 compute-0 sudo[61261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwtbaxxhgaibbnikdmlveubfsydxkzsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064075.6654694-135-32427567519276/AnsiballZ_stat.py'
Jan 10 16:54:35 compute-0 sudo[61261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:36 compute-0 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:36 compute-0 sudo[61261]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:36 compute-0 sudo[61384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfmvdnaslsomandjhjkxncsopwaaorx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064075.6654694-135-32427567519276/AnsiballZ_copy.py'
Jan 10 16:54:36 compute-0 sudo[61384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:36 compute-0 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064075.6654694-135-32427567519276/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:36 compute-0 sudo[61384]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:37 compute-0 sudo[61536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpshaajcrakqyaisemversdqfnsncsvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064076.8942351-150-169215427361968/AnsiballZ_systemd.py'
Jan 10 16:54:37 compute-0 sudo[61536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:37 compute-0 python3.9[61538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:37 compute-0 systemd[1]: Reloading.
Jan 10 16:54:37 compute-0 systemd-rc-local-generator[61567]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:37 compute-0 systemd-sysv-generator[61571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:38 compute-0 systemd[1]: Reloading.
Jan 10 16:54:38 compute-0 systemd-sysv-generator[61602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:38 compute-0 systemd-rc-local-generator[61599]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:38 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 10 16:54:38 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 10 16:54:38 compute-0 sudo[61536]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:38 compute-0 sudo[61762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzjradyrunofcjdvcbnopvmrlogoimc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064078.5615966-158-259876866495762/AnsiballZ_stat.py'
Jan 10 16:54:38 compute-0 sudo[61762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:39 compute-0 python3.9[61764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:39 compute-0 sudo[61762]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:39 compute-0 sudo[61885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgsrvicfzyzoonxeifgfhmxztzezkvcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064078.5615966-158-259876866495762/AnsiballZ_copy.py'
Jan 10 16:54:39 compute-0 sudo[61885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:39 compute-0 python3.9[61887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064078.5615966-158-259876866495762/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:39 compute-0 sudo[61885]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:39 compute-0 sudo[62037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itrzsctpqruhlndammsnnqkdhxyjgvdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064079.708673-173-213946176817083/AnsiballZ_stat.py'
Jan 10 16:54:39 compute-0 sudo[62037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:40 compute-0 python3.9[62039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:40 compute-0 sudo[62037]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:40 compute-0 sudo[62160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbshtlsixtzckfcpftlwwesyefbsuiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064079.708673-173-213946176817083/AnsiballZ_copy.py'
Jan 10 16:54:40 compute-0 sudo[62160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:40 compute-0 python3.9[62162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064079.708673-173-213946176817083/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:40 compute-0 sudo[62160]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:41 compute-0 sudo[62312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakocoejyvsnuoxwikjcoasxsdwteqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064080.9573445-188-276449535716329/AnsiballZ_systemd.py'
Jan 10 16:54:41 compute-0 sudo[62312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:41 compute-0 python3.9[62314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:41 compute-0 systemd[1]: Reloading.
Jan 10 16:54:41 compute-0 systemd-rc-local-generator[62343]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:41 compute-0 systemd-sysv-generator[62347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:41 compute-0 systemd[1]: Reloading.
Jan 10 16:54:41 compute-0 systemd-rc-local-generator[62379]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:41 compute-0 systemd-sysv-generator[62382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:42 compute-0 systemd[1]: Starting Create netns directory...
Jan 10 16:54:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 16:54:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 16:54:42 compute-0 systemd[1]: Finished Create netns directory.
Jan 10 16:54:42 compute-0 sudo[62312]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:42 compute-0 python3.9[62540]: ansible-ansible.builtin.service_facts Invoked
Jan 10 16:54:43 compute-0 network[62557]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 16:54:43 compute-0 network[62558]: 'network-scripts' will be removed from distribution in near future.
Jan 10 16:54:43 compute-0 network[62559]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 16:54:48 compute-0 sudo[62819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfsuyzenzwzyrcojsoststtgkjyoodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064088.3507314-204-154651988157925/AnsiballZ_systemd.py'
Jan 10 16:54:48 compute-0 sudo[62819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:49 compute-0 python3.9[62821]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:49 compute-0 systemd[1]: Reloading.
Jan 10 16:54:49 compute-0 systemd-rc-local-generator[62845]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:49 compute-0 systemd-sysv-generator[62851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:49 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 10 16:54:49 compute-0 iptables.init[62861]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 10 16:54:49 compute-0 iptables.init[62861]: iptables: Flushing firewall rules: [  OK  ]
Jan 10 16:54:49 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 10 16:54:49 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 10 16:54:49 compute-0 sudo[62819]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:50 compute-0 sudo[63055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asfhkrogacyxxgcqfyhbcazigxgyklgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064089.9065483-204-207440572785777/AnsiballZ_systemd.py'
Jan 10 16:54:50 compute-0 sudo[63055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:50 compute-0 python3.9[63057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:50 compute-0 sudo[63055]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:51 compute-0 sudo[63209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktpfkrrbjsymsgkiooeqikjmmjlhldds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064090.8411405-220-42381321120257/AnsiballZ_systemd.py'
Jan 10 16:54:51 compute-0 sudo[63209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:51 compute-0 python3.9[63211]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:54:51 compute-0 systemd[1]: Reloading.
Jan 10 16:54:51 compute-0 systemd-rc-local-generator[63241]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:54:51 compute-0 systemd-sysv-generator[63245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:54:51 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 10 16:54:51 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 10 16:54:51 compute-0 sudo[63209]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:52 compute-0 sshd-session[63212]: Invalid user admin from 216.36.124.133 port 37816
Jan 10 16:54:52 compute-0 sudo[63402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhmjxfhpqssgcfxzkjtvoskqnjzetrll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064091.9593973-228-108826816280869/AnsiballZ_command.py'
Jan 10 16:54:52 compute-0 sudo[63402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:52 compute-0 python3.9[63404]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:54:52 compute-0 sudo[63402]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:53 compute-0 sshd-session[63212]: Connection closed by invalid user admin 216.36.124.133 port 37816 [preauth]
Jan 10 16:54:53 compute-0 sudo[63555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqysjujtmlfyowqrgaximsifmeqjniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064092.9674098-242-123190661667199/AnsiballZ_stat.py'
Jan 10 16:54:53 compute-0 sudo[63555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:53 compute-0 python3.9[63557]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:53 compute-0 sudo[63555]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:53 compute-0 sudo[63680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzgnnijlwiidmcwobmvhgpoqupjmqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064092.9674098-242-123190661667199/AnsiballZ_copy.py'
Jan 10 16:54:53 compute-0 sudo[63680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:54 compute-0 python3.9[63682]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064092.9674098-242-123190661667199/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:54 compute-0 sudo[63680]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:54 compute-0 sudo[63833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrmrkfxdmmbezdyxnchxnfjxuqvniej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064094.365733-257-68977195141807/AnsiballZ_systemd.py'
Jan 10 16:54:54 compute-0 sudo[63833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:54 compute-0 python3.9[63835]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:54:55 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 10 16:54:55 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 10 16:54:55 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 10 16:54:55 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 10 16:54:55 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 10 16:54:55 compute-0 sudo[63833]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:55 compute-0 sudo[63989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptdfgwlqqljbkfrzihtjokcunxavdqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064095.3078115-265-140508914081509/AnsiballZ_file.py'
Jan 10 16:54:55 compute-0 sudo[63989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:55 compute-0 python3.9[63991]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:55 compute-0 sudo[63989]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:56 compute-0 sudo[64141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmoouewnevmloljapcafwaqjvpexkrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064095.9862332-273-66539776044936/AnsiballZ_stat.py'
Jan 10 16:54:56 compute-0 sudo[64141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:56 compute-0 python3.9[64143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:56 compute-0 sudo[64141]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:56 compute-0 sudo[64264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncjmbwcsihhknpjwkeeptnfbwaszomr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064095.9862332-273-66539776044936/AnsiballZ_copy.py'
Jan 10 16:54:56 compute-0 sudo[64264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:57 compute-0 python3.9[64266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064095.9862332-273-66539776044936/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:57 compute-0 sudo[64264]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:57 compute-0 sudo[64416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocejntqynhgkljpcmnyienctgvlwzeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064097.3631775-291-75661324798631/AnsiballZ_timezone.py'
Jan 10 16:54:57 compute-0 sudo[64416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:58 compute-0 python3.9[64418]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 16:54:58 compute-0 systemd[1]: Starting Time & Date Service...
Jan 10 16:54:58 compute-0 systemd[1]: Started Time & Date Service.
Jan 10 16:54:58 compute-0 sudo[64416]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:58 compute-0 sudo[64572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efghahqsjofohnoynihupsahwlrgrtsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064098.3533254-300-173875800128675/AnsiballZ_file.py'
Jan 10 16:54:58 compute-0 sudo[64572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:58 compute-0 python3.9[64574]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:54:58 compute-0 sudo[64572]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:59 compute-0 sudo[64724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dubouwqdtmcbzkgysaxmdabcbzipiwwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064099.0083985-308-237106065279914/AnsiballZ_stat.py'
Jan 10 16:54:59 compute-0 sudo[64724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:54:59 compute-0 python3.9[64726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:54:59 compute-0 sudo[64724]: pam_unix(sudo:session): session closed for user root
Jan 10 16:54:59 compute-0 sudo[64847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccuqgxqmrbrzhhqnbvfnqyfmgqhpnvfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064099.0083985-308-237106065279914/AnsiballZ_copy.py'
Jan 10 16:54:59 compute-0 sudo[64847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:00 compute-0 python3.9[64849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064099.0083985-308-237106065279914/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:00 compute-0 sudo[64847]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:00 compute-0 sudo[64999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilghirgvonweriyazkycelvqqlypzcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064100.2082083-323-171115577821664/AnsiballZ_stat.py'
Jan 10 16:55:00 compute-0 sudo[64999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:00 compute-0 python3.9[65001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:00 compute-0 sudo[64999]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:01 compute-0 sudo[65122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinsklfibcgfdrmraekaxsqpqszozpzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064100.2082083-323-171115577821664/AnsiballZ_copy.py'
Jan 10 16:55:01 compute-0 sudo[65122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:01 compute-0 python3.9[65124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064100.2082083-323-171115577821664/.source.yaml _original_basename=.6jocap2c follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:01 compute-0 sudo[65122]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:01 compute-0 sudo[65274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdinmvwrtoazzwhowrpkknbhormszuoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064101.4290164-338-43507799260864/AnsiballZ_stat.py'
Jan 10 16:55:01 compute-0 sudo[65274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:01 compute-0 python3.9[65276]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:01 compute-0 sudo[65274]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:02 compute-0 sudo[65397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcrbklarbmczmloxeknigvfwclhnjzwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064101.4290164-338-43507799260864/AnsiballZ_copy.py'
Jan 10 16:55:02 compute-0 sudo[65397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:02 compute-0 python3.9[65399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064101.4290164-338-43507799260864/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:02 compute-0 sudo[65397]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:02 compute-0 sudo[65550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbwraphyxcidvzxfpgygqvfdfjfysca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064102.6820583-353-105944994315617/AnsiballZ_command.py'
Jan 10 16:55:02 compute-0 sudo[65550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:03 compute-0 python3.9[65552]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:03 compute-0 sudo[65550]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:03 compute-0 sudo[65703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toipnrsriuongmottskehrlifinhppnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064103.292782-361-236762835629391/AnsiballZ_command.py'
Jan 10 16:55:03 compute-0 sudo[65703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:03 compute-0 python3.9[65705]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:03 compute-0 sudo[65703]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:04 compute-0 sudo[65856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpaspngyscmtrpftogjdysmznsohjyl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064103.9537148-369-40399510733290/AnsiballZ_edpm_nftables_from_files.py'
Jan 10 16:55:04 compute-0 sudo[65856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:04 compute-0 python3[65858]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 16:55:04 compute-0 sudo[65856]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:05 compute-0 sudo[66008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqusoxeiydjvqmxzjfccvjebnfvjsty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064104.78193-377-27315577260614/AnsiballZ_stat.py'
Jan 10 16:55:05 compute-0 sudo[66008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:05 compute-0 python3.9[66010]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:05 compute-0 sudo[66008]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:05 compute-0 sudo[66131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exnkauzgwbuuivchzzyyryoputdccpas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064104.78193-377-27315577260614/AnsiballZ_copy.py'
Jan 10 16:55:05 compute-0 sudo[66131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:05 compute-0 python3.9[66133]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064104.78193-377-27315577260614/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:05 compute-0 sudo[66131]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:06 compute-0 sudo[66283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpsktxrfgvblgkqrnnsjguwwsljemgmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064106.0899832-392-98624383713792/AnsiballZ_stat.py'
Jan 10 16:55:06 compute-0 sudo[66283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:06 compute-0 python3.9[66285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:06 compute-0 sudo[66283]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:06 compute-0 sudo[66406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhmizxspwdcyfjoviicaxkcaapdcwjvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064106.0899832-392-98624383713792/AnsiballZ_copy.py'
Jan 10 16:55:06 compute-0 sudo[66406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:07 compute-0 python3.9[66408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064106.0899832-392-98624383713792/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:07 compute-0 sudo[66406]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:07 compute-0 sudo[66558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqbhimmmfsttgndtpzgefcoazucglsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064107.3581686-407-59295820658138/AnsiballZ_stat.py'
Jan 10 16:55:07 compute-0 sudo[66558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:07 compute-0 python3.9[66560]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:07 compute-0 sudo[66558]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:08 compute-0 sudo[66681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngoynnfscxlakycvruwkwjxnvgojaogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064107.3581686-407-59295820658138/AnsiballZ_copy.py'
Jan 10 16:55:08 compute-0 sudo[66681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:08 compute-0 python3.9[66683]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064107.3581686-407-59295820658138/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:08 compute-0 sudo[66681]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:08 compute-0 sudo[66833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgbjaczwnwhkftyczyzkwwupyfszwmst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064108.542549-422-62426910618462/AnsiballZ_stat.py'
Jan 10 16:55:08 compute-0 sudo[66833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:09 compute-0 python3.9[66835]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:09 compute-0 sudo[66833]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:09 compute-0 sudo[66956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmbiviwpypohkzwhuhgfavjbkepmiuqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064108.542549-422-62426910618462/AnsiballZ_copy.py'
Jan 10 16:55:09 compute-0 sudo[66956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:09 compute-0 python3.9[66958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064108.542549-422-62426910618462/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:09 compute-0 sudo[66956]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:10 compute-0 sudo[67108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prljxsbknwpipndubtacyzadiooyxlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064109.858952-437-137312440973/AnsiballZ_stat.py'
Jan 10 16:55:10 compute-0 sudo[67108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:10 compute-0 python3.9[67110]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 16:55:10 compute-0 sudo[67108]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:10 compute-0 sudo[67231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsyhmgfklcaktbffirjzjjbfzsvkpqhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064109.858952-437-137312440973/AnsiballZ_copy.py'
Jan 10 16:55:10 compute-0 sudo[67231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:11 compute-0 python3.9[67233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064109.858952-437-137312440973/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:11 compute-0 sudo[67231]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:11 compute-0 sudo[67383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlfjhslbydeznsxmekhhbndbwqenwbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064111.273789-452-131424806401983/AnsiballZ_file.py'
Jan 10 16:55:11 compute-0 sudo[67383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:11 compute-0 python3.9[67385]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:11 compute-0 sudo[67383]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:12 compute-0 sudo[67535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hssctcnpuwsddtbpqpocrztcnmdgoimp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064111.9715607-460-17788675259388/AnsiballZ_command.py'
Jan 10 16:55:12 compute-0 sudo[67535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:12 compute-0 python3.9[67537]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:12 compute-0 sudo[67535]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:13 compute-0 sudo[67694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omleetwfgxwmdikqahptzvggrdxvucwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064112.7263992-468-10457544941950/AnsiballZ_blockinfile.py'
Jan 10 16:55:13 compute-0 sudo[67694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:13 compute-0 python3.9[67696]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:13 compute-0 sudo[67694]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:14 compute-0 sudo[67847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-padefbcwxqpoadkqadmouhgzujxhhycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064113.6971579-477-55407442305315/AnsiballZ_file.py'
Jan 10 16:55:14 compute-0 sudo[67847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:14 compute-0 python3.9[67849]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:14 compute-0 sudo[67847]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:14 compute-0 sudo[67999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gahdstsmiazexybkekeyniycdcfpexvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064114.3627193-477-262877632710202/AnsiballZ_file.py'
Jan 10 16:55:14 compute-0 sudo[67999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:14 compute-0 python3.9[68001]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:14 compute-0 sudo[67999]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:15 compute-0 sudo[68151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlfwbmhdtczwueohcyeintnhvztjnnpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064115.126222-492-80705736471655/AnsiballZ_mount.py'
Jan 10 16:55:15 compute-0 sudo[68151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:15 compute-0 python3.9[68153]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 16:55:15 compute-0 sudo[68151]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:16 compute-0 sudo[68304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrbbwokbwxmnudpzztuifimpvvxlnhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064116.0191562-492-62519197712415/AnsiballZ_mount.py'
Jan 10 16:55:16 compute-0 sudo[68304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:16 compute-0 python3.9[68306]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 16:55:16 compute-0 sudo[68304]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:17 compute-0 sshd-session[59103]: Connection closed by 192.168.122.30 port 33760
Jan 10 16:55:17 compute-0 sshd-session[59100]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:55:17 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 10 16:55:17 compute-0 systemd[1]: session-14.scope: Consumed 37.985s CPU time.
Jan 10 16:55:17 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Jan 10 16:55:17 compute-0 systemd-logind[798]: Removed session 14.
Jan 10 16:55:22 compute-0 sshd-session[68332]: Accepted publickey for zuul from 192.168.122.30 port 38054 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:55:22 compute-0 systemd-logind[798]: New session 15 of user zuul.
Jan 10 16:55:22 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 10 16:55:22 compute-0 sshd-session[68332]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:55:23 compute-0 sudo[68485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfaivddljrgedzkcmrrhdmyuucvrwlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064122.696199-16-22833422100967/AnsiballZ_tempfile.py'
Jan 10 16:55:23 compute-0 sudo[68485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:23 compute-0 python3.9[68487]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 10 16:55:23 compute-0 sudo[68485]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:24 compute-0 sudo[68637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqrfqpqzvkyioybwoxoojgmpbodcgkye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064123.6286006-28-85775567595106/AnsiballZ_stat.py'
Jan 10 16:55:24 compute-0 sudo[68637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:24 compute-0 python3.9[68639]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:55:24 compute-0 sudo[68637]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:25 compute-0 sudo[68789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hutzhxmfoiwordttuojsgoldfstnpvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064124.532008-38-218677738739864/AnsiballZ_setup.py'
Jan 10 16:55:25 compute-0 sudo[68789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:25 compute-0 python3.9[68791]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:55:25 compute-0 sudo[68789]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:26 compute-0 sudo[68941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umydzxhukdjzbdhbodogddqoxtkmmpox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064125.616362-47-45447282076986/AnsiballZ_blockinfile.py'
Jan 10 16:55:26 compute-0 sudo[68941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:26 compute-0 python3.9[68943]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLbf1u7QZKIo5G+YWiNhcXI+Bt6YV4GfE/ux3dizYMgWBt9o+PmlYYMiVREbRw0Bbw1ytXXbF5+nj3Xb2CXI8ussGl0WspjKSeiZ6iZLcZTiCJLgJ/9hsvwXR//dQk9MHjPU21/f9Bmm5bXO7JD6wyeZ6BhNNSRil+tMQ9dtlaRlLoSzr5CXtKSgvp0EnFO/wO0yIjn5vj0Kg53pKe6PklqqbDKQe4B3RTSjCo711H66GqFuA0OZDkpKEVqdQFy9HUPAxgflwamxh1bRZYQ4oZ+sRK0y7Aau5nyIxefmh+nrgkwpuGnfu/PBcFHlgDpGdK5SR2MN7oUwfJtJl+qp1MFaUz+TRF7THXK8e6MCD0RPGfqlim6D6qGfKkbBYM50kTncYakPtGOrLbf/hARiTSEduglbNBYv0vatpv1emwjOPwkAu3DZdOi4PokhOq+BnOnG95UH3ZzOWO+UnNEiCQgCu7NbzJOFb/KoBU8XRT1o8yPWdpwQ+mKGFE1PGsA7k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVw/TzKh+QQYsI9HFUl2xKC/Iozkh6C2Rlm1r7qShYC
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHuUq5M0wkVhsnk90cNjQOZixGqQR1X/PXyTQuPIQfBmEkOk4KlPkJk1al+bzULcCOXjdbnilDQbL6yRpQlhrU=
                                             create=True mode=0644 path=/tmp/ansible.v4k87suj state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:26 compute-0 sudo[68941]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:26 compute-0 sudo[69093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxgmnvihksulqrnicwvvccwzunznkcke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064126.4396155-55-53274676037365/AnsiballZ_command.py'
Jan 10 16:55:26 compute-0 sudo[69093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:27 compute-0 python3.9[69095]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.v4k87suj' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:27 compute-0 sudo[69093]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:27 compute-0 sudo[69247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxkjrvxpwjvtfmpgoandjrjnaiijetn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064127.2509441-63-68142026650039/AnsiballZ_file.py'
Jan 10 16:55:27 compute-0 sudo[69247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:27 compute-0 python3.9[69249]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.v4k87suj state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:27 compute-0 sudo[69247]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 16:55:28 compute-0 sshd-session[68335]: Connection closed by 192.168.122.30 port 38054
Jan 10 16:55:28 compute-0 sshd-session[68332]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:55:28 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 10 16:55:28 compute-0 systemd[1]: session-15.scope: Consumed 3.653s CPU time.
Jan 10 16:55:28 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Jan 10 16:55:28 compute-0 systemd-logind[798]: Removed session 15.
Jan 10 16:55:33 compute-0 sshd-session[69276]: Accepted publickey for zuul from 192.168.122.30 port 40406 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:55:33 compute-0 systemd-logind[798]: New session 16 of user zuul.
Jan 10 16:55:33 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 10 16:55:33 compute-0 sshd-session[69276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:55:34 compute-0 python3.9[69429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:55:35 compute-0 sudo[69583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcuwsscgkandnkrrbduqimoxcnzhsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064134.7511969-27-169602066739990/AnsiballZ_systemd.py'
Jan 10 16:55:35 compute-0 sudo[69583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:35 compute-0 python3.9[69585]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 16:55:35 compute-0 sudo[69583]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:36 compute-0 sudo[69737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogpgdqsdmbvssclrzpbbxjqbyfcasupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064135.9070387-35-104218986407173/AnsiballZ_systemd.py'
Jan 10 16:55:36 compute-0 sudo[69737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:36 compute-0 python3.9[69739]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 16:55:37 compute-0 sudo[69737]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:38 compute-0 sudo[69890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrngoapuccrzccbdrbjidfrfnvnsfot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064137.78991-44-17495616512679/AnsiballZ_command.py'
Jan 10 16:55:38 compute-0 sudo[69890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:38 compute-0 python3.9[69892]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:38 compute-0 sudo[69890]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:39 compute-0 sudo[70043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqbiqpxtrddtfeqxqyswforpfmirstd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064138.6735685-52-39631162938313/AnsiballZ_stat.py'
Jan 10 16:55:39 compute-0 sudo[70043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:39 compute-0 python3.9[70045]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:55:39 compute-0 sudo[70043]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:39 compute-0 sudo[70197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamwtxcrpozcxcqicknmhpshcjzzeeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064139.4417672-60-263168223014296/AnsiballZ_command.py'
Jan 10 16:55:39 compute-0 sudo[70197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:39 compute-0 python3.9[70199]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:39 compute-0 sudo[70197]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:40 compute-0 sudo[70352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjctunlkxbypqbrmjxlyetovkkxrvgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064140.1812165-68-19431402523091/AnsiballZ_file.py'
Jan 10 16:55:40 compute-0 sudo[70352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:40 compute-0 python3.9[70354]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:55:40 compute-0 sudo[70352]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:41 compute-0 sshd-session[69279]: Connection closed by 192.168.122.30 port 40406
Jan 10 16:55:41 compute-0 sshd-session[69276]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:55:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 10 16:55:41 compute-0 systemd[1]: session-16.scope: Consumed 4.741s CPU time.
Jan 10 16:55:41 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Jan 10 16:55:41 compute-0 systemd-logind[798]: Removed session 16.
Jan 10 16:55:46 compute-0 sshd-session[70379]: Accepted publickey for zuul from 192.168.122.30 port 52534 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:55:46 compute-0 systemd-logind[798]: New session 17 of user zuul.
Jan 10 16:55:46 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 10 16:55:46 compute-0 sshd-session[70379]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:55:47 compute-0 python3.9[70532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:55:48 compute-0 sudo[70686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqusfxpspegrwtkpjhihrdhcznjhinlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064148.1965258-29-121664645154873/AnsiballZ_setup.py'
Jan 10 16:55:48 compute-0 sudo[70686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:48 compute-0 python3.9[70688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 16:55:49 compute-0 sudo[70686]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:49 compute-0 sudo[70770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqfdbiohefinukqxafsuxegaoianyfax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064148.1965258-29-121664645154873/AnsiballZ_dnf.py'
Jan 10 16:55:49 compute-0 sudo[70770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:55:49 compute-0 python3.9[70772]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 16:55:51 compute-0 sudo[70770]: pam_unix(sudo:session): session closed for user root
Jan 10 16:55:52 compute-0 python3.9[70923]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:55:53 compute-0 python3.9[71074]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 16:55:54 compute-0 python3.9[71224]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:55:54 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 16:55:54 compute-0 python3.9[71375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 16:55:55 compute-0 sshd-session[70382]: Connection closed by 192.168.122.30 port 52534
Jan 10 16:55:55 compute-0 sshd-session[70379]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:55:55 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 10 16:55:55 compute-0 systemd[1]: session-17.scope: Consumed 6.417s CPU time.
Jan 10 16:55:55 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Jan 10 16:55:55 compute-0 systemd-logind[798]: Removed session 17.
Jan 10 16:56:02 compute-0 sshd-session[71400]: Accepted publickey for zuul from 38.102.83.82 port 45756 ssh2: RSA SHA256:dyXfdFt4JSR1rmxb/SO9ENtHN43FPPABVlLhSeU8+co
Jan 10 16:56:02 compute-0 systemd-logind[798]: New session 18 of user zuul.
Jan 10 16:56:02 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 10 16:56:02 compute-0 sshd-session[71400]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:56:03 compute-0 sudo[71476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxoxvvtjbmniwtvdfxqbxjmwrjwdzmaz ; /usr/bin/python3'
Jan 10 16:56:03 compute-0 sudo[71476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:03 compute-0 useradd[71480]: new group: name=ceph-admin, GID=42478
Jan 10 16:56:03 compute-0 useradd[71480]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 10 16:56:03 compute-0 sudo[71476]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:03 compute-0 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urltezvxewpqqqygjzmmzkmxxjiqxcno ; /usr/bin/python3'
Jan 10 16:56:03 compute-0 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:03 compute-0 sudo[71562]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:04 compute-0 sudo[71635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxalzztnzkgjqpkedvapzodccfnizduh ; /usr/bin/python3'
Jan 10 16:56:04 compute-0 sudo[71635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:04 compute-0 sudo[71635]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:04 compute-0 sudo[71685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqbixspsukvgvedaisbdnehoyhrnzvkp ; /usr/bin/python3'
Jan 10 16:56:04 compute-0 sudo[71685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:04 compute-0 sudo[71685]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:05 compute-0 sudo[71711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnsogmlxuvoccauecgoagqrhlseaodkv ; /usr/bin/python3'
Jan 10 16:56:05 compute-0 sudo[71711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:05 compute-0 sudo[71711]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:05 compute-0 sudo[71737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqxathnpcmdayqsssgssjcqgehyanmne ; /usr/bin/python3'
Jan 10 16:56:05 compute-0 sudo[71737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:05 compute-0 sudo[71737]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:05 compute-0 sudo[71763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eduprpmuxkaasibqihxhayghssqxhoyb ; /usr/bin/python3'
Jan 10 16:56:05 compute-0 sudo[71763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:05 compute-0 sudo[71763]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:06 compute-0 sudo[71841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmuteihlstrmvcemboyronhwlhhjzjxw ; /usr/bin/python3'
Jan 10 16:56:06 compute-0 sudo[71841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:06 compute-0 sudo[71841]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:06 compute-0 sudo[71914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldqtzgdiddsbhmdazrzrwqvyxjpvuffu ; /usr/bin/python3'
Jan 10 16:56:06 compute-0 sudo[71914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:06 compute-0 sudo[71914]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:07 compute-0 sudo[72016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvzwaxzqxbzezctizicizcicbcpfnton ; /usr/bin/python3'
Jan 10 16:56:07 compute-0 sudo[72016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:07 compute-0 sudo[72016]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:07 compute-0 sudo[72089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adgtwncoiabkcgjvcczbrljkwqaopgnv ; /usr/bin/python3'
Jan 10 16:56:07 compute-0 sudo[72089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:07 compute-0 sudo[72089]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:08 compute-0 sudo[72139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihugxyzihfmcltdrbufplfsvtbkmgsgv ; /usr/bin/python3'
Jan 10 16:56:08 compute-0 sudo[72139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:08 compute-0 python3[72141]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:56:09 compute-0 sudo[72139]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:09 compute-0 sudo[72234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moibqodmhdskopvhupuciipndrusuumi ; /usr/bin/python3'
Jan 10 16:56:09 compute-0 sudo[72234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:10 compute-0 python3[72236]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:56:11 compute-0 sudo[72234]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:11 compute-0 sudo[72261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndfgrczmyloparucgydensvhbamfxgok ; /usr/bin/python3'
Jan 10 16:56:11 compute-0 sudo[72261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:11 compute-0 python3[72263]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:11 compute-0 sudo[72261]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:12 compute-0 sudo[72287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvuoksyqhciztbbxpkhrppbeywusrnd ; /usr/bin/python3'
Jan 10 16:56:12 compute-0 sudo[72287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:12 compute-0 python3[72289]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:12 compute-0 kernel: loop: module loaded
Jan 10 16:56:12 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 10 16:56:12 compute-0 sudo[72287]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:12 compute-0 sudo[72322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnvxtdptspioslfumnfzrmgcbtqepxk ; /usr/bin/python3'
Jan 10 16:56:12 compute-0 sudo[72322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:12 compute-0 python3[72324]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:12 compute-0 lvm[72327]: PV /dev/loop3 not used.
Jan 10 16:56:12 compute-0 lvm[72329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:56:12 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 10 16:56:12 compute-0 lvm[72339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:56:12 compute-0 lvm[72339]: VG ceph_vg0 finished
Jan 10 16:56:12 compute-0 lvm[72337]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 10 16:56:12 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 10 16:56:12 compute-0 sudo[72322]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:13 compute-0 sudo[72415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imyspvfkabrxfhilqcwrgymtcsugtjai ; /usr/bin/python3'
Jan 10 16:56:13 compute-0 sudo[72415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:13 compute-0 python3[72417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:56:13 compute-0 sudo[72415]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:13 compute-0 sudo[72488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonskrcsuzgjawmzveialdywrrerrpxc ; /usr/bin/python3'
Jan 10 16:56:13 compute-0 sudo[72488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:13 compute-0 python3[72490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064172.9395354-36189-63096646957512/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:13 compute-0 sudo[72488]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:14 compute-0 sudo[72538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsicagkzsrifrhuixhnkjzdkcbpappk ; /usr/bin/python3'
Jan 10 16:56:14 compute-0 sudo[72538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:14 compute-0 python3[72540]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:56:14 compute-0 systemd[1]: Reloading.
Jan 10 16:56:14 compute-0 systemd-rc-local-generator[72568]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:56:14 compute-0 systemd-sysv-generator[72572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:56:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 10 16:56:14 compute-0 bash[72579]: /dev/loop3: [64513]:4348699 (/var/lib/ceph-osd-0.img)
Jan 10 16:56:14 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 10 16:56:14 compute-0 sudo[72538]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:14 compute-0 lvm[72580]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:56:14 compute-0 lvm[72580]: VG ceph_vg0 finished
Jan 10 16:56:15 compute-0 sudo[72604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcojnggkptcxnkldljojpjvqgktbnuq ; /usr/bin/python3'
Jan 10 16:56:15 compute-0 sudo[72604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:15 compute-0 python3[72606]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:56:16 compute-0 sudo[72604]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:16 compute-0 sudo[72631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juzqnfyajcnfxrvzrqhbscvthmowrsmo ; /usr/bin/python3'
Jan 10 16:56:16 compute-0 sudo[72631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:17 compute-0 python3[72633]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:17 compute-0 sudo[72631]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:17 compute-0 sudo[72657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwxzcaoaycyceokkvdkxlksmdchwsfu ; /usr/bin/python3'
Jan 10 16:56:17 compute-0 sudo[72657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:17 compute-0 python3[72659]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:17 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 10 16:56:17 compute-0 sudo[72657]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:17 compute-0 sudo[72689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkjmmyyfymtzepghumtpewlqfuiqzfch ; /usr/bin/python3'
Jan 10 16:56:17 compute-0 sudo[72689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:17 compute-0 python3[72691]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:17 compute-0 lvm[72694]: PV /dev/loop4 not used.
Jan 10 16:56:17 compute-0 lvm[72696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:56:17 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 10 16:56:17 compute-0 lvm[72702]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 10 16:56:17 compute-0 lvm[72707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:56:17 compute-0 lvm[72707]: VG ceph_vg1 finished
Jan 10 16:56:17 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 10 16:56:18 compute-0 sudo[72689]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:18 compute-0 sudo[72783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbkxdqssjdunyzmfphuurhgvgfurzho ; /usr/bin/python3'
Jan 10 16:56:18 compute-0 sudo[72783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:18 compute-0 python3[72785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:56:18 compute-0 sudo[72783]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:18 compute-0 sudo[72856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlsdqkitnomaylzoyiaphjwaweqjqqk ; /usr/bin/python3'
Jan 10 16:56:18 compute-0 sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:18 compute-0 python3[72858]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064178.1484025-36218-113938940877657/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:18 compute-0 sudo[72856]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:19 compute-0 sudo[72906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcruolqxytqmabfiefvlqznuzufmdcph ; /usr/bin/python3'
Jan 10 16:56:19 compute-0 sudo[72906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:19 compute-0 chronyd[58619]: Selected source 142.4.192.253 (pool.ntp.org)
Jan 10 16:56:19 compute-0 python3[72908]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:56:19 compute-0 systemd[1]: Reloading.
Jan 10 16:56:19 compute-0 systemd-rc-local-generator[72937]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:56:19 compute-0 systemd-sysv-generator[72941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:56:19 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 10 16:56:19 compute-0 bash[72948]: /dev/loop4: [64513]:4348710 (/var/lib/ceph-osd-1.img)
Jan 10 16:56:19 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 10 16:56:19 compute-0 sudo[72906]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:19 compute-0 lvm[72949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:56:19 compute-0 lvm[72949]: VG ceph_vg1 finished
Jan 10 16:56:19 compute-0 sudo[72973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjmsmijvxajxoervzlcxvneozdmzecg ; /usr/bin/python3'
Jan 10 16:56:19 compute-0 sudo[72973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:20 compute-0 python3[72975]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:56:21 compute-0 sudo[72973]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:21 compute-0 sudo[73000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfncfwyaulvtvgkmhkmmrsajoaomxwg ; /usr/bin/python3'
Jan 10 16:56:21 compute-0 sudo[73000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:21 compute-0 python3[73002]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:21 compute-0 sudo[73000]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:21 compute-0 sudo[73026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvcxbxmshwqdwwqxmgomcwqkuzjdbitp ; /usr/bin/python3'
Jan 10 16:56:21 compute-0 sudo[73026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:22 compute-0 python3[73028]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:22 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 10 16:56:22 compute-0 sudo[73026]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:22 compute-0 sudo[73058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igcfakjifiqunjrrktqumdhgnhnsjjvz ; /usr/bin/python3'
Jan 10 16:56:22 compute-0 sudo[73058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:22 compute-0 python3[73060]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:22 compute-0 lvm[73063]: PV /dev/loop5 not used.
Jan 10 16:56:22 compute-0 lvm[73065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:56:22 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 10 16:56:22 compute-0 lvm[73076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:56:22 compute-0 lvm[73076]: VG ceph_vg2 finished
Jan 10 16:56:22 compute-0 lvm[73074]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 10 16:56:22 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 10 16:56:22 compute-0 sudo[73058]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:23 compute-0 sudo[73152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdoxdjbawiveossaeizwvuvxuhpdeiby ; /usr/bin/python3'
Jan 10 16:56:23 compute-0 sudo[73152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:23 compute-0 python3[73154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:56:23 compute-0 sudo[73152]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:23 compute-0 sudo[73225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsonpphfdjfjoubadlewcdmicktgofgv ; /usr/bin/python3'
Jan 10 16:56:23 compute-0 sudo[73225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:23 compute-0 python3[73227]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064182.9052546-36245-239685696138934/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:23 compute-0 sudo[73225]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:23 compute-0 sudo[73275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkvssyboakfyhpbpgqbzzusjsrdvsfs ; /usr/bin/python3'
Jan 10 16:56:23 compute-0 sudo[73275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:24 compute-0 python3[73277]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 16:56:24 compute-0 systemd[1]: Reloading.
Jan 10 16:56:24 compute-0 systemd-rc-local-generator[73305]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:56:24 compute-0 systemd-sysv-generator[73309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:56:24 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 10 16:56:24 compute-0 bash[73316]: /dev/loop5: [64513]:4348783 (/var/lib/ceph-osd-2.img)
Jan 10 16:56:24 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 10 16:56:24 compute-0 lvm[73317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:56:24 compute-0 lvm[73317]: VG ceph_vg2 finished
Jan 10 16:56:24 compute-0 sudo[73275]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:26 compute-0 python3[73341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:56:28 compute-0 sudo[73432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jojcwgycosrvgtiwhowxneofwzigxtiv ; /usr/bin/python3'
Jan 10 16:56:28 compute-0 sudo[73432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:28 compute-0 python3[73434]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:56:31 compute-0 sudo[73432]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:31 compute-0 sudo[73489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lueozgckuqonrjpsjxpjqzaelrloelhu ; /usr/bin/python3'
Jan 10 16:56:31 compute-0 sudo[73489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:31 compute-0 python3[73491]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 16:56:34 compute-0 groupadd[73501]: group added to /etc/group: name=cephadm, GID=993
Jan 10 16:56:34 compute-0 groupadd[73501]: group added to /etc/gshadow: name=cephadm
Jan 10 16:56:34 compute-0 groupadd[73501]: new group: name=cephadm, GID=993
Jan 10 16:56:34 compute-0 useradd[73508]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 10 16:56:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 16:56:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 16:56:35 compute-0 sudo[73489]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 16:56:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 16:56:35 compute-0 systemd[1]: run-r4b63edc7b24945c2b06fa4660b50dc25.service: Deactivated successfully.
Jan 10 16:56:35 compute-0 sudo[73609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahbautszejmowsfqsoqtrqgxlbvcizfd ; /usr/bin/python3'
Jan 10 16:56:35 compute-0 sudo[73609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:35 compute-0 python3[73611]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:35 compute-0 sudo[73609]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:35 compute-0 sudo[73637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbketdkhrqcvigmtcqhcneuolugvfixv ; /usr/bin/python3'
Jan 10 16:56:35 compute-0 sudo[73637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:35 compute-0 python3[73639]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:56:36 compute-0 sudo[73637]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:36 compute-0 sudo[73677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhkusnluqbtgunohpkkyhojasxzawsl ; /usr/bin/python3'
Jan 10 16:56:36 compute-0 sudo[73677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:36 compute-0 python3[73679]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:36 compute-0 sudo[73677]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:36 compute-0 sudo[73703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpulopirpaqpovsjzozuzfanncyvqzgr ; /usr/bin/python3'
Jan 10 16:56:36 compute-0 sudo[73703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:37 compute-0 python3[73705]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:37 compute-0 sudo[73703]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:37 compute-0 sudo[73781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inusctzqdzbkhexaegjhcdhefovtqqai ; /usr/bin/python3'
Jan 10 16:56:37 compute-0 sudo[73781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:37 compute-0 python3[73783]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:56:37 compute-0 sudo[73781]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:38 compute-0 sudo[73854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpwbqazsniiyjxgemlaaxxzerosafjhs ; /usr/bin/python3'
Jan 10 16:56:38 compute-0 sudo[73854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:38 compute-0 python3[73856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064197.579233-36393-169975549169158/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:38 compute-0 sudo[73854]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:38 compute-0 sudo[73956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lycbozewfydgortncaabqthrfegrbtth ; /usr/bin/python3'
Jan 10 16:56:38 compute-0 sudo[73956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:39 compute-0 python3[73958]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:56:39 compute-0 sudo[73956]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:39 compute-0 sudo[74029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djugimxysczyqaugosmxxdqpqhzznzud ; /usr/bin/python3'
Jan 10 16:56:39 compute-0 sudo[74029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:39 compute-0 python3[74031]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064198.7679796-36411-175565344358162/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:56:39 compute-0 sudo[74029]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:39 compute-0 sudo[74079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsngtzlhbmjupohmzjchelukkpjdxsq ; /usr/bin/python3'
Jan 10 16:56:39 compute-0 sudo[74079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:39 compute-0 python3[74081]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:39 compute-0 sudo[74079]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:39 compute-0 sudo[74107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnihvajxnfihviadwumgogqzwhztqem ; /usr/bin/python3'
Jan 10 16:56:39 compute-0 sudo[74107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:40 compute-0 python3[74109]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:40 compute-0 sudo[74107]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:40 compute-0 sudo[74135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzgkgmzduexgyzlaeiyvgeqybzaavtz ; /usr/bin/python3'
Jan 10 16:56:40 compute-0 sudo[74135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:40 compute-0 python3[74137]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:56:40 compute-0 sudo[74135]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:40 compute-0 sudo[74163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpmgvhskckswtqdarzycxutkisvtnvc ; /usr/bin/python3'
Jan 10 16:56:40 compute-0 sudo[74163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:56:40 compute-0 python3[74165]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:56:41 compute-0 sshd-session[74169]: Accepted publickey for ceph-admin from 192.168.122.100 port 37618 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:56:41 compute-0 systemd-logind[798]: New session 19 of user ceph-admin.
Jan 10 16:56:41 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 10 16:56:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 10 16:56:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 10 16:56:41 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 10 16:56:41 compute-0 systemd[74173]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:56:41 compute-0 systemd[74173]: Queued start job for default target Main User Target.
Jan 10 16:56:41 compute-0 systemd[74173]: Created slice User Application Slice.
Jan 10 16:56:41 compute-0 systemd[74173]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 16:56:41 compute-0 systemd[74173]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 16:56:41 compute-0 systemd[74173]: Reached target Paths.
Jan 10 16:56:41 compute-0 systemd[74173]: Reached target Timers.
Jan 10 16:56:41 compute-0 systemd[74173]: Starting D-Bus User Message Bus Socket...
Jan 10 16:56:41 compute-0 systemd[74173]: Starting Create User's Volatile Files and Directories...
Jan 10 16:56:41 compute-0 systemd[74173]: Finished Create User's Volatile Files and Directories.
Jan 10 16:56:41 compute-0 systemd[74173]: Listening on D-Bus User Message Bus Socket.
Jan 10 16:56:41 compute-0 systemd[74173]: Reached target Sockets.
Jan 10 16:56:41 compute-0 systemd[74173]: Reached target Basic System.
Jan 10 16:56:41 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 10 16:56:41 compute-0 systemd[74173]: Reached target Main User Target.
Jan 10 16:56:41 compute-0 systemd[74173]: Startup finished in 143ms.
Jan 10 16:56:41 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 10 16:56:41 compute-0 sshd-session[74169]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:56:41 compute-0 sudo[74190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 10 16:56:41 compute-0 sudo[74190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:56:41 compute-0 sudo[74190]: pam_unix(sudo:session): session closed for user root
Jan 10 16:56:41 compute-0 sshd-session[74189]: Received disconnect from 192.168.122.100 port 37618:11: disconnected by user
Jan 10 16:56:41 compute-0 sshd-session[74189]: Disconnected from user ceph-admin 192.168.122.100 port 37618
Jan 10 16:56:41 compute-0 sshd-session[74169]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 10 16:56:41 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 10 16:56:41 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Jan 10 16:56:41 compute-0 systemd-logind[798]: Removed session 19.
Jan 10 16:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:56:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2903914786-lower\x2dmapped.mount: Deactivated successfully.
Jan 10 16:56:48 compute-0 sshd-session[74311]: Connection closed by authenticating user root 216.36.124.133 port 38944 [preauth]
Jan 10 16:56:51 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 10 16:56:51 compute-0 systemd[74173]: Activating special unit Exit the Session...
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped target Main User Target.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped target Basic System.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped target Paths.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped target Sockets.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped target Timers.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 10 16:56:51 compute-0 systemd[74173]: Closed D-Bus User Message Bus Socket.
Jan 10 16:56:51 compute-0 systemd[74173]: Stopped Create User's Volatile Files and Directories.
Jan 10 16:56:51 compute-0 systemd[74173]: Removed slice User Application Slice.
Jan 10 16:56:51 compute-0 systemd[74173]: Reached target Shutdown.
Jan 10 16:56:51 compute-0 systemd[74173]: Finished Exit the Session.
Jan 10 16:56:51 compute-0 systemd[74173]: Reached target Exit the Session.
Jan 10 16:56:51 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 10 16:56:51 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 10 16:56:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 10 16:56:51 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 10 16:56:51 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 10 16:56:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 10 16:56:51 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 10 16:57:11 compute-0 podman[74267]: 2026-01-10 16:57:11.859651117 +0000 UTC m=+30.117093316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:11.926566953 +0000 UTC m=+0.033486680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.131291583 +0000 UTC m=+0.238211310 container create e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 16:57:12 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 10 16:57:12 compute-0 systemd[1]: Started libpod-conmon-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope.
Jan 10 16:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.275257072 +0000 UTC m=+0.382176869 container init e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.285691797 +0000 UTC m=+0.392611524 container start e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.289342231 +0000 UTC m=+0.396261988 container attach e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:12 compute-0 sharp_shirley[74395]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 10 16:57:12 compute-0 systemd[1]: libpod-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.400593843 +0000 UTC m=+0.507513550 container died e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7eb89fcda6dc3265814bb505d59de5d40338a2ed4b3d91da45e3b3d1f136f5-merged.mount: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74379]: 2026-01-10 16:57:12.442529011 +0000 UTC m=+0.549448718 container remove e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 16:57:12 compute-0 systemd[1]: libpod-conmon-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.515090626 +0000 UTC m=+0.047247859 container create b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 16:57:12 compute-0 systemd[1]: Started libpod-conmon-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope.
Jan 10 16:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.584292867 +0000 UTC m=+0.116450110 container init b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.591853581 +0000 UTC m=+0.124010814 container start b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.496669264 +0000 UTC m=+0.028826507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.595451843 +0000 UTC m=+0.127609066 container attach b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:12 compute-0 lucid_panini[74427]: 167 167
Jan 10 16:57:12 compute-0 systemd[1]: libpod-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.597142011 +0000 UTC m=+0.129299234 container died b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:12 compute-0 podman[74411]: 2026-01-10 16:57:12.637219286 +0000 UTC m=+0.169376509 container remove b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:12 compute-0 systemd[1]: libpod-conmon-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.70266735 +0000 UTC m=+0.044620275 container create c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 10 16:57:12 compute-0 systemd[1]: Started libpod-conmon-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope.
Jan 10 16:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.765050008 +0000 UTC m=+0.107002973 container init c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.77288377 +0000 UTC m=+0.114836705 container start c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.681357207 +0000 UTC m=+0.023310212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.776283936 +0000 UTC m=+0.118236891 container attach c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 16:57:12 compute-0 sharp_sanderson[74463]: AQDohGJp80dVMBAAeHZKHzzJeJb07qXvmjPA9w==
Jan 10 16:57:12 compute-0 systemd[1]: libpod-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.815125207 +0000 UTC m=+0.157078162 container died c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 16:57:12 compute-0 podman[74447]: 2026-01-10 16:57:12.853573216 +0000 UTC m=+0.195526151 container remove c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:12 compute-0 systemd[1]: libpod-conmon-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope: Deactivated successfully.
Jan 10 16:57:12 compute-0 podman[74483]: 2026-01-10 16:57:12.959495137 +0000 UTC m=+0.071701403 container create c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:12 compute-0 systemd[1]: Started libpod-conmon-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope.
Jan 10 16:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:13.02457037 +0000 UTC m=+0.136776666 container init c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:12.930108464 +0000 UTC m=+0.042314810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:13.031480506 +0000 UTC m=+0.143686762 container start c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:13.035220302 +0000 UTC m=+0.147426598 container attach c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:57:13 compute-0 gallant_herschel[74499]: AQDphGJpi72pAxAAsj9DdzBXQVI15FyQV5JZkg==
Jan 10 16:57:13 compute-0 systemd[1]: libpod-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:13.065748307 +0000 UTC m=+0.177954563 container died c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccef3c0ee65cfd73bbb86db1e59557317f390da9e22aaa8eebce1232127c8a59-merged.mount: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74483]: 2026-01-10 16:57:13.109101975 +0000 UTC m=+0.221308271 container remove c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:57:13 compute-0 systemd[1]: libpod-conmon-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.191808978 +0000 UTC m=+0.057184821 container create b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 16:57:13 compute-0 systemd[1]: Started libpod-conmon-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope.
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.166065599 +0000 UTC m=+0.031441512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.274922453 +0000 UTC m=+0.140298436 container init b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.280686316 +0000 UTC m=+0.146062209 container start b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.284474904 +0000 UTC m=+0.149850777 container attach b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:13 compute-0 elegant_galileo[74534]: AQDphGJpbr0kEhAAJ0IYbjf4Qbpg+ZFhEbWXjA==
Jan 10 16:57:13 compute-0 systemd[1]: libpod-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.308453163 +0000 UTC m=+0.173829016 container died b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:13 compute-0 podman[74517]: 2026-01-10 16:57:13.348979031 +0000 UTC m=+0.214354874 container remove b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:13 compute-0 systemd[1]: libpod-conmon-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.460246753 +0000 UTC m=+0.075952152 container create 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:13 compute-0 systemd[1]: Started libpod-conmon-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope.
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.431506849 +0000 UTC m=+0.047212298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d49bcf623d76ed34b2319c4723529e2a300102eb4332b3d9e68a4776ac86d9/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.538362176 +0000 UTC m=+0.154067565 container init 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.546333472 +0000 UTC m=+0.162038841 container start 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.550464889 +0000 UTC m=+0.166170458 container attach 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 16:57:13 compute-0 nervous_lewin[74569]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 10 16:57:13 compute-0 nervous_lewin[74569]: setting min_mon_release = tentacle
Jan 10 16:57:13 compute-0 nervous_lewin[74569]: /usr/bin/monmaptool: set fsid to a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:13 compute-0 nervous_lewin[74569]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 10 16:57:13 compute-0 systemd[1]: libpod-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.596854904 +0000 UTC m=+0.212560283 container died 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:13 compute-0 podman[74553]: 2026-01-10 16:57:13.640416978 +0000 UTC m=+0.256122357 container remove 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:57:13 compute-0 systemd[1]: libpod-conmon-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.703041642 +0000 UTC m=+0.039891951 container create 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:13 compute-0 systemd[1]: Started libpod-conmon-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope.
Jan 10 16:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.774310661 +0000 UTC m=+0.111160980 container init 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.685953978 +0000 UTC m=+0.022804297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.782638957 +0000 UTC m=+0.119489256 container start 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.786044344 +0000 UTC m=+0.122894673 container attach 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:13 compute-0 systemd[1]: libpod-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.895657179 +0000 UTC m=+0.232507478 container died 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 16:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7-merged.mount: Deactivated successfully.
Jan 10 16:57:13 compute-0 podman[74587]: 2026-01-10 16:57:13.934463658 +0000 UTC m=+0.271313957 container remove 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 16:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:13 compute-0 systemd[1]: libpod-conmon-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope: Deactivated successfully.
Jan 10 16:57:14 compute-0 systemd[1]: Reloading.
Jan 10 16:57:14 compute-0 systemd-sysv-generator[74671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:14 compute-0 systemd-rc-local-generator[74668]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:14 compute-0 systemd[1]: Reloading.
Jan 10 16:57:14 compute-0 systemd-rc-local-generator[74707]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:14 compute-0 systemd-sysv-generator[74710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:14 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 10 16:57:14 compute-0 systemd[1]: Reloading.
Jan 10 16:57:14 compute-0 systemd-sysv-generator[74747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:14 compute-0 systemd-rc-local-generator[74744]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:14 compute-0 systemd[1]: Reached target Ceph cluster a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:14 compute-0 systemd[1]: Reloading.
Jan 10 16:57:14 compute-0 systemd-sysv-generator[74784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:14 compute-0 systemd-rc-local-generator[74780]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:15 compute-0 systemd[1]: Reloading.
Jan 10 16:57:15 compute-0 systemd-rc-local-generator[74822]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:15 compute-0 systemd-sysv-generator[74825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:15 compute-0 systemd[1]: Created slice Slice /system/ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:15 compute-0 systemd[1]: Reached target System Time Set.
Jan 10 16:57:15 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 10 16:57:15 compute-0 systemd[1]: Starting Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:15 compute-0 podman[74880]: 2026-01-10 16:57:15.605490079 +0000 UTC m=+0.043597156 container create fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 podman[74880]: 2026-01-10 16:57:15.678091106 +0000 UTC m=+0.116198163 container init fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 16:57:15 compute-0 podman[74880]: 2026-01-10 16:57:15.684074796 +0000 UTC m=+0.122181833 container start fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:15 compute-0 podman[74880]: 2026-01-10 16:57:15.588476657 +0000 UTC m=+0.026583714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:15 compute-0 bash[74880]: fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8
Jan 10 16:57:15 compute-0 systemd[1]: Started Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:15 compute-0 ceph-mon[74900]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: pidfile_write: ignore empty --pid-file
Jan 10 16:57:15 compute-0 ceph-mon[74900]: load: jerasure load: lrc 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Git sha 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: DB SUMMARY
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: DB Session ID:  CJOGPED9GW0POJY2FBQK
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                                     Options.env: 0x55984a925440
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                                Options.info_log: 0x55984cb7b3e0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                                 Options.wal_dir: 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                    Options.write_buffer_manager: 0x55984cafa140
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                               Options.row_cache: None
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                              Options.wal_filter: None
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.wal_compression: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.max_background_jobs: 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.max_total_wal_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:       Options.compaction_readahead_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Compression algorithms supported:
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kZSTD supported: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:           Options.merge_operator: 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:        Options.compaction_filter: None
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55984cb06600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55984caeb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:        Options.write_buffer_size: 33554432
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:  Options.max_write_buffer_number: 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.compression: NoCompression
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.num_levels: 7
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235729896, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235732603, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "CJOGPED9GW0POJY2FBQK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235732756, "job": 1, "event": "recovery_finished"}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55984cb18e00
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: DB pointer 0x55984cc64000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 16:57:15 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55984caeb8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 10 16:57:15 compute-0 ceph-mon[74900]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@-1(???) e0 preinit fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 10 16:57:15 compute-0 ceph-mon[74900]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 10 16:57:15 compute-0 ceph-mon[74900]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : created 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-10T16:57:13.831154Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Dec 29 08:24:22 UTC 2025,kernel_version=5.14.0-655.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).mds e1 new map
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-10T16:57:15:771836+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : fsmap 
Jan 10 16:57:15 compute-0 podman[74901]: 2026-01-10 16:57:15.782514154 +0000 UTC m=+0.057174660 container create ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mkfs a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 10 16:57:15 compute-0 ceph-mon[74900]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 10 16:57:15 compute-0 ceph-mon[74900]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:15 compute-0 systemd[1]: Started libpod-conmon-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope.
Jan 10 16:57:15 compute-0 podman[74901]: 2026-01-10 16:57:15.758799813 +0000 UTC m=+0.033460379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:15 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:15 compute-0 podman[74901]: 2026-01-10 16:57:15.880459149 +0000 UTC m=+0.155119705 container init ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 16:57:15 compute-0 podman[74901]: 2026-01-10 16:57:15.891624316 +0000 UTC m=+0.166284862 container start ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 16:57:15 compute-0 podman[74901]: 2026-01-10 16:57:15.89565896 +0000 UTC m=+0.170319506 container attach ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885305628' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:   cluster:
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     id:     a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     health: HEALTH_OK
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:  
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:   services:
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     mon: 1 daemons, quorum compute-0 (age 0.327597s) [leader: compute-0]
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     mgr: no daemons active
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     osd: 0 osds: 0 up, 0 in
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:  
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:   data:
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     pools:   0 pools, 0 pgs
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     objects: 0 objects, 0 B
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     usage:   0 B used, 0 B / 0 B avail
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:     pgs:     
Jan 10 16:57:16 compute-0 hopeful_mahavira[74955]:  
Jan 10 16:57:16 compute-0 systemd[1]: libpod-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope: Deactivated successfully.
Jan 10 16:57:16 compute-0 podman[74901]: 2026-01-10 16:57:16.11486566 +0000 UTC m=+0.389526226 container died ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:16 compute-0 podman[74901]: 2026-01-10 16:57:16.156783158 +0000 UTC m=+0.431443674 container remove ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:16 compute-0 systemd[1]: libpod-conmon-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope: Deactivated successfully.
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.224680981 +0000 UTC m=+0.041751213 container create a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 16:57:16 compute-0 systemd[1]: Started libpod-conmon-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope.
Jan 10 16:57:16 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.299680816 +0000 UTC m=+0.116751138 container init a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.210665824 +0000 UTC m=+0.027736076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.308411874 +0000 UTC m=+0.125482106 container start a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.31147012 +0000 UTC m=+0.128540352 container attach a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 16:57:16 compute-0 ceph-mon[74900]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 10 16:57:16 compute-0 sweet_hellman[75008]: 
Jan 10 16:57:16 compute-0 sweet_hellman[75008]: [global]
Jan 10 16:57:16 compute-0 sweet_hellman[75008]:         fsid = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:16 compute-0 sweet_hellman[75008]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 10 16:57:16 compute-0 sweet_hellman[75008]:         osd_crush_chooseleaf_type = 0
Jan 10 16:57:16 compute-0 systemd[1]: libpod-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope: Deactivated successfully.
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.549659528 +0000 UTC m=+0.366729790 container died a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517-merged.mount: Deactivated successfully.
Jan 10 16:57:16 compute-0 podman[74991]: 2026-01-10 16:57:16.60901877 +0000 UTC m=+0.426089012 container remove a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 16:57:16 compute-0 systemd[1]: libpod-conmon-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope: Deactivated successfully.
Jan 10 16:57:16 compute-0 podman[75045]: 2026-01-10 16:57:16.661136847 +0000 UTC m=+0.033985064 container create f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:16 compute-0 systemd[1]: Started libpod-conmon-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope.
Jan 10 16:57:16 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:16 compute-0 podman[75045]: 2026-01-10 16:57:16.740404872 +0000 UTC m=+0.113253109 container init f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 16:57:16 compute-0 podman[75045]: 2026-01-10 16:57:16.646167392 +0000 UTC m=+0.019015639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:16 compute-0 podman[75045]: 2026-01-10 16:57:16.748931134 +0000 UTC m=+0.121779371 container start f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:57:16 compute-0 podman[75045]: 2026-01-10 16:57:16.752551426 +0000 UTC m=+0.125399673 container attach f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: monmap epoch 1
Jan 10 16:57:16 compute-0 ceph-mon[74900]: fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:16 compute-0 ceph-mon[74900]: last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:16 compute-0 ceph-mon[74900]: created 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:16 compute-0 ceph-mon[74900]: min_mon_release 20 (tentacle)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: election_strategy: 1
Jan 10 16:57:16 compute-0 ceph-mon[74900]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 16:57:16 compute-0 ceph-mon[74900]: fsmap 
Jan 10 16:57:16 compute-0 ceph-mon[74900]: osdmap e1: 0 total, 0 up, 0 in
Jan 10 16:57:16 compute-0 ceph-mon[74900]: mgrmap e1: no daemons active
Jan 10 16:57:16 compute-0 ceph-mon[74900]: from='client.? 192.168.122.100:0/3885305628' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 16:57:16 compute-0 ceph-mon[74900]: from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 16:57:16 compute-0 ceph-mon[74900]: from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 10 16:57:16 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:57:16 compute-0 ceph-mon[74900]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015864717' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:57:17 compute-0 systemd[1]: libpod-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope: Deactivated successfully.
Jan 10 16:57:17 compute-0 podman[75045]: 2026-01-10 16:57:17.008833367 +0000 UTC m=+0.381681594 container died f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 16:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008-merged.mount: Deactivated successfully.
Jan 10 16:57:17 compute-0 podman[75045]: 2026-01-10 16:57:17.059563584 +0000 UTC m=+0.432411811 container remove f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:17 compute-0 systemd[1]: libpod-conmon-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope: Deactivated successfully.
Jan 10 16:57:17 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:57:17 compute-0 ceph-mon[74900]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 10 16:57:17 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 10 16:57:17 compute-0 ceph-mon[74900]: mon.compute-0@0(leader) e1 shutdown
Jan 10 16:57:17 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[74896]: 2026-01-10T16:57:17.505+0000 7fe694f5f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 10 16:57:17 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[74896]: 2026-01-10T16:57:17.505+0000 7fe694f5f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 10 16:57:17 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 16:57:17 compute-0 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 16:57:17 compute-0 podman[75129]: 2026-01-10 16:57:17.660592811 +0000 UTC m=+0.460833356 container died fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f-merged.mount: Deactivated successfully.
Jan 10 16:57:17 compute-0 podman[75129]: 2026-01-10 16:57:17.745795235 +0000 UTC m=+0.546035780 container remove fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 10 16:57:17 compute-0 bash[75129]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0
Jan 10 16:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 16:57:17 compute-0 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mon.compute-0.service: Deactivated successfully.
Jan 10 16:57:17 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:17 compute-0 systemd[1]: Starting Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:57:18 compute-0 podman[75230]: 2026-01-10 16:57:18.114317465 +0000 UTC m=+0.047463115 container create 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 podman[75230]: 2026-01-10 16:57:18.091754526 +0000 UTC m=+0.024900156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:18 compute-0 podman[75230]: 2026-01-10 16:57:18.1981448 +0000 UTC m=+0.131290500 container init 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 16:57:18 compute-0 podman[75230]: 2026-01-10 16:57:18.208206465 +0000 UTC m=+0.141352115 container start 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:18 compute-0 bash[75230]: 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7
Jan 10 16:57:18 compute-0 systemd[1]: Started Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:18 compute-0 ceph-mon[75249]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: pidfile_write: ignore empty --pid-file
Jan 10 16:57:18 compute-0 ceph-mon[75249]: load: jerasure load: lrc 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Git sha 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: DB SUMMARY
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: DB Session ID:  VPFJD76VNV79HUMFHEYZ
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                                     Options.env: 0x55efa203d440
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                                Options.info_log: 0x55efa2bb3e80
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                                 Options.wal_dir: 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                    Options.write_buffer_manager: 0x55efa2bfe140
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                               Options.row_cache: None
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                              Options.wal_filter: None
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.wal_compression: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.max_background_jobs: 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.max_total_wal_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:       Options.compaction_readahead_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Compression algorithms supported:
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kZSTD supported: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:           Options.merge_operator: 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:        Options.compaction_filter: None
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55efa2c0aa00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55efa2bef8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:        Options.write_buffer_size: 33554432
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:  Options.max_write_buffer_number: 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.compression: NoCompression
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.num_levels: 7
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238277182, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238282034, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064238, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238282376, "job": 1, "event": "recovery_finished"}
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55efa2c1ce00
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: DB pointer 0x55efa2d66000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 16:57:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 10 16:57:18 compute-0 ceph-mon[75249]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???) e1 preinit fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).mds e1 new map
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-10T16:57:15:771836+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 10 16:57:18 compute-0 ceph-mon[75249]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : created 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 10 16:57:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.324876211 +0000 UTC m=+0.067427262 container create df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 16:57:18 compute-0 systemd[1]: Started libpod-conmon-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope.
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: monmap epoch 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:18 compute-0 ceph-mon[75249]: last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: created 2026-01-10T16:57:13.592121+0000
Jan 10 16:57:18 compute-0 ceph-mon[75249]: min_mon_release 20 (tentacle)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: election_strategy: 1
Jan 10 16:57:18 compute-0 ceph-mon[75249]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 16:57:18 compute-0 ceph-mon[75249]: fsmap 
Jan 10 16:57:18 compute-0 ceph-mon[75249]: osdmap e1: 0 total, 0 up, 0 in
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mgrmap e1: no daemons active
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.297369681 +0000 UTC m=+0.039920802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:18 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.430298517 +0000 UTC m=+0.172849588 container init df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.438590742 +0000 UTC m=+0.181141773 container start df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.442560815 +0000 UTC m=+0.185111866 container attach df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 10 16:57:18 compute-0 systemd[1]: libpod-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope: Deactivated successfully.
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.67499733 +0000 UTC m=+0.417548471 container died df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f-merged.mount: Deactivated successfully.
Jan 10 16:57:18 compute-0 podman[75250]: 2026-01-10 16:57:18.718973516 +0000 UTC m=+0.461524557 container remove df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:18 compute-0 systemd[1]: libpod-conmon-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope: Deactivated successfully.
Jan 10 16:57:18 compute-0 podman[75338]: 2026-01-10 16:57:18.796235905 +0000 UTC m=+0.048011921 container create 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:57:18 compute-0 systemd[1]: Started libpod-conmon-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope.
Jan 10 16:57:18 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:18 compute-0 podman[75338]: 2026-01-10 16:57:18.775384034 +0000 UTC m=+0.027160060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:18 compute-0 podman[75338]: 2026-01-10 16:57:18.892289976 +0000 UTC m=+0.144066022 container init 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:18 compute-0 podman[75338]: 2026-01-10 16:57:18.897223856 +0000 UTC m=+0.148999842 container start 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 16:57:18 compute-0 podman[75338]: 2026-01-10 16:57:18.900729885 +0000 UTC m=+0.152505891 container attach 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 10 16:57:19 compute-0 systemd[1]: libpod-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope: Deactivated successfully.
Jan 10 16:57:19 compute-0 podman[75338]: 2026-01-10 16:57:19.158457917 +0000 UTC m=+0.410233933 container died 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3-merged.mount: Deactivated successfully.
Jan 10 16:57:19 compute-0 podman[75338]: 2026-01-10 16:57:19.21258227 +0000 UTC m=+0.464358296 container remove 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 16:57:19 compute-0 systemd[1]: libpod-conmon-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope: Deactivated successfully.
Jan 10 16:57:19 compute-0 systemd[1]: Reloading.
Jan 10 16:57:19 compute-0 systemd-rc-local-generator[75421]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:19 compute-0 systemd-sysv-generator[75425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:19 compute-0 systemd[1]: Reloading.
Jan 10 16:57:19 compute-0 systemd-rc-local-generator[75461]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:19 compute-0 systemd-sysv-generator[75464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:19 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mkxlpr for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:57:20 compute-0 podman[75519]: 2026-01-10 16:57:20.03605952 +0000 UTC m=+0.064133838 container create 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/lib/ceph/mgr/ceph-compute-0.mkxlpr supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 podman[75519]: 2026-01-10 16:57:20.098262532 +0000 UTC m=+0.126336860 container init 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:20 compute-0 podman[75519]: 2026-01-10 16:57:20.012176193 +0000 UTC m=+0.040250591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:20 compute-0 podman[75519]: 2026-01-10 16:57:20.110932811 +0000 UTC m=+0.139007129 container start 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:20 compute-0 bash[75519]: 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200
Jan 10 16:57:20 compute-0 systemd[1]: Started Ceph mgr.compute-0.mkxlpr for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: pidfile_write: ignore empty --pid-file
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'alerts'
Jan 10 16:57:20 compute-0 podman[75539]: 2026-01-10 16:57:20.202681361 +0000 UTC m=+0.046914581 container create 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:20 compute-0 systemd[1]: Started libpod-conmon-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope.
Jan 10 16:57:20 compute-0 podman[75539]: 2026-01-10 16:57:20.182416806 +0000 UTC m=+0.026650076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:20 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'balancer'
Jan 10 16:57:20 compute-0 podman[75539]: 2026-01-10 16:57:20.306351648 +0000 UTC m=+0.150584978 container init 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 16:57:20 compute-0 podman[75539]: 2026-01-10 16:57:20.31490107 +0000 UTC m=+0.159134300 container start 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 16:57:20 compute-0 podman[75539]: 2026-01-10 16:57:20.322766823 +0000 UTC m=+0.167000143 container attach 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:57:20 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'cephadm'
Jan 10 16:57:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 16:57:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179129347' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]: 
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]: {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "health": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "status": "HEALTH_OK",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "checks": {},
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "mutes": []
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "election_epoch": 5,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "quorum": [
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         0
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     ],
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "quorum_names": [
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "compute-0"
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     ],
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "quorum_age": 2,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "monmap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "epoch": 1,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "min_mon_release_name": "tentacle",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_mons": 1
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "osdmap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "epoch": 1,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_osds": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_up_osds": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "osd_up_since": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_in_osds": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "osd_in_since": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_remapped_pgs": 0
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "pgmap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "pgs_by_state": [],
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_pgs": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_pools": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_objects": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "data_bytes": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "bytes_used": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "bytes_avail": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "bytes_total": 0
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "fsmap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "epoch": 1,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "btime": "2026-01-10T16:57:15:771836+0000",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "by_rank": [],
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "up:standby": 0
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "mgrmap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "available": false,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "num_standbys": 0,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "modules": [
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:             "iostat",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:             "nfs"
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         ],
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "services": {}
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "servicemap": {
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "epoch": 1,
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:         "services": {}
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     },
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]:     "progress_events": {}
Jan 10 16:57:20 compute-0 gifted_ritchie[75576]: }
Jan 10 16:57:20 compute-0 systemd[1]: libpod-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope: Deactivated successfully.
Jan 10 16:57:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1179129347' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:20 compute-0 podman[75602]: 2026-01-10 16:57:20.634751592 +0000 UTC m=+0.031207356 container died 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 16:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d-merged.mount: Deactivated successfully.
Jan 10 16:57:20 compute-0 podman[75602]: 2026-01-10 16:57:20.682992588 +0000 UTC m=+0.079448332 container remove 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:57:20 compute-0 systemd[1]: libpod-conmon-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope: Deactivated successfully.
Jan 10 16:57:21 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'crash'
Jan 10 16:57:21 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'dashboard'
Jan 10 16:57:21 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'devicehealth'
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 16:57:22 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 16:57:22 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 16:57:22 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]:   from numpy import show_config as show_numpy_config
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'influx'
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'insights'
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'iostat'
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'k8sevents'
Jan 10 16:57:22 compute-0 podman[75629]: 2026-01-10 16:57:22.816658466 +0000 UTC m=+0.095015743 container create a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'localpool'
Jan 10 16:57:22 compute-0 podman[75629]: 2026-01-10 16:57:22.765093295 +0000 UTC m=+0.043450552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:22 compute-0 systemd[1]: Started libpod-conmon-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope.
Jan 10 16:57:22 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:22 compute-0 podman[75629]: 2026-01-10 16:57:22.903255259 +0000 UTC m=+0.181612506 container init a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 16:57:22 compute-0 podman[75629]: 2026-01-10 16:57:22.909687752 +0000 UTC m=+0.188044989 container start a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:22 compute-0 podman[75629]: 2026-01-10 16:57:22.916968168 +0000 UTC m=+0.195325415 container attach a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:22 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 16:57:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 16:57:23 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499614654' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:23 compute-0 vigilant_easley[75646]: 
Jan 10 16:57:23 compute-0 vigilant_easley[75646]: {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "health": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "status": "HEALTH_OK",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "checks": {},
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "mutes": []
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "election_epoch": 5,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "quorum": [
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         0
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     ],
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "quorum_names": [
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "compute-0"
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     ],
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "quorum_age": 4,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "monmap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "epoch": 1,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "min_mon_release_name": "tentacle",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_mons": 1
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "osdmap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "epoch": 1,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_osds": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_up_osds": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "osd_up_since": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_in_osds": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "osd_in_since": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_remapped_pgs": 0
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "pgmap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "pgs_by_state": [],
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_pgs": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_pools": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_objects": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "data_bytes": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "bytes_used": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "bytes_avail": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "bytes_total": 0
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "fsmap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "epoch": 1,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "by_rank": [],
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "up:standby": 0
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "mgrmap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "available": false,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "num_standbys": 0,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "modules": [
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:             "iostat",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:             "nfs"
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         ],
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "services": {}
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "servicemap": {
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "epoch": 1,
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:         "services": {}
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     },
Jan 10 16:57:23 compute-0 vigilant_easley[75646]:     "progress_events": {}
Jan 10 16:57:23 compute-0 vigilant_easley[75646]: }
Jan 10 16:57:23 compute-0 systemd[1]: libpod-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope: Deactivated successfully.
Jan 10 16:57:23 compute-0 podman[75629]: 2026-01-10 16:57:23.131510126 +0000 UTC m=+0.409867363 container died a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:57:23 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2499614654' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'mirroring'
Jan 10 16:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c-merged.mount: Deactivated successfully.
Jan 10 16:57:23 compute-0 podman[75629]: 2026-01-10 16:57:23.241750239 +0000 UTC m=+0.520107486 container remove a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:23 compute-0 systemd[1]: libpod-conmon-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope: Deactivated successfully.
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'nfs'
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'orchestrator'
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'osd_support'
Jan 10 16:57:23 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 16:57:24 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'progress'
Jan 10 16:57:24 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'prometheus'
Jan 10 16:57:24 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rbd_support'
Jan 10 16:57:24 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rgw'
Jan 10 16:57:24 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rook'
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.375400863 +0000 UTC m=+0.096274874 container create f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 16:57:25 compute-0 systemd[1]: Started libpod-conmon-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope.
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.329574904 +0000 UTC m=+0.050448975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:25 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'selftest'
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.468553211 +0000 UTC m=+0.189427222 container init f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.474091882 +0000 UTC m=+0.194965883 container start f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.477926376 +0000 UTC m=+0.198800417 container attach f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:25 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'smb'
Jan 10 16:57:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 16:57:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317659530' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:25 compute-0 competent_poitras[75699]: 
Jan 10 16:57:25 compute-0 competent_poitras[75699]: {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "health": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "status": "HEALTH_OK",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "checks": {},
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "mutes": []
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "election_epoch": 5,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "quorum": [
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         0
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     ],
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "quorum_names": [
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "compute-0"
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     ],
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "quorum_age": 7,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "monmap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "epoch": 1,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "min_mon_release_name": "tentacle",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_mons": 1
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "osdmap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "epoch": 1,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_osds": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_up_osds": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "osd_up_since": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_in_osds": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "osd_in_since": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_remapped_pgs": 0
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "pgmap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "pgs_by_state": [],
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_pgs": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_pools": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_objects": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "data_bytes": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "bytes_used": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "bytes_avail": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "bytes_total": 0
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "fsmap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "epoch": 1,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "by_rank": [],
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "up:standby": 0
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "mgrmap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "available": false,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "num_standbys": 0,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "modules": [
Jan 10 16:57:25 compute-0 competent_poitras[75699]:             "iostat",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:             "nfs"
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         ],
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "services": {}
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "servicemap": {
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "epoch": 1,
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 16:57:25 compute-0 competent_poitras[75699]:         "services": {}
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     },
Jan 10 16:57:25 compute-0 competent_poitras[75699]:     "progress_events": {}
Jan 10 16:57:25 compute-0 competent_poitras[75699]: }
Jan 10 16:57:25 compute-0 systemd[1]: libpod-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope: Deactivated successfully.
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.720077304 +0000 UTC m=+0.440951295 container died f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 16:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180-merged.mount: Deactivated successfully.
Jan 10 16:57:25 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2317659530' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:25 compute-0 podman[75683]: 2026-01-10 16:57:25.756340592 +0000 UTC m=+0.477214583 container remove f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:57:25 compute-0 systemd[1]: libpod-conmon-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope: Deactivated successfully.
Jan 10 16:57:25 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'snap_schedule'
Jan 10 16:57:25 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'stats'
Jan 10 16:57:25 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'status'
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'telegraf'
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'telemetry'
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'test_orchestrator'
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'volumes'
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: ms_deliver_dispatch: unhandled message 0x55c8a2a2b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mkxlpr
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mkxlpr(active, starting, since 0.0100662s)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr handle_mgr_map Activating!
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr handle_mgr_map I am now activating
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e1 all = 1
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: balancer
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Manager daemon compute-0.mkxlpr is now available
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer INFO root] Starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: crash
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: devicehealth
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:57:26
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [balancer INFO root] No pools available
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: iostat
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: nfs
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: orchestrator
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: pg_autoscaler
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: progress
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [progress INFO root] Loading...
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [progress INFO root] No stored events to load
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [progress INFO root] Loaded [] historic events
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [progress INFO root] Loaded OSDMap, ready.
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] recovery thread starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] starting setup
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: rbd_support
Jan 10 16:57:26 compute-0 ceph-mon[75249]: Activating manager daemon compute-0.mkxlpr
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mgrmap e2: compute-0.mkxlpr(active, starting, since 0.0100662s)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: Manager daemon compute-0.mkxlpr is now available
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: status
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: telemetry
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] PerfHandler: starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TaskHandler: starting
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} v 0)
Jan 10 16:57:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 16:57:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 10 16:57:26 compute-0 ceph-mgr[75538]: [rbd_support INFO root] setup complete
Jan 10 16:57:27 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: volumes
Jan 10 16:57:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:27 compute-0 podman[75815]: 2026-01-10 16:57:27.863755172 +0000 UTC m=+0.069134644 container create 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 16:57:27 compute-0 systemd[1]: Started libpod-conmon-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope.
Jan 10 16:57:27 compute-0 podman[75815]: 2026-01-10 16:57:27.834493215 +0000 UTC m=+0.039872747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:27 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mkxlpr(active, since 1.03538s)
Jan 10 16:57:27 compute-0 podman[75815]: 2026-01-10 16:57:27.967424587 +0000 UTC m=+0.172804099 container init 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:27 compute-0 podman[75815]: 2026-01-10 16:57:27.974036887 +0000 UTC m=+0.179416339 container start 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:57:27 compute-0 podman[75815]: 2026-01-10 16:57:27.978513349 +0000 UTC m=+0.183892851 container attach 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 16:57:27 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 16:57:27 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:27 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:27 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 16:57:27 compute-0 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:27 compute-0 ceph-mon[75249]: mgrmap e3: compute-0.mkxlpr(active, since 1.03538s)
Jan 10 16:57:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 16:57:28 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128525717' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:28 compute-0 focused_joliot[75831]: 
Jan 10 16:57:28 compute-0 focused_joliot[75831]: {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "health": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "status": "HEALTH_OK",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "checks": {},
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "mutes": []
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "election_epoch": 5,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "quorum": [
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         0
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     ],
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "quorum_names": [
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "compute-0"
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     ],
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "quorum_age": 10,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "monmap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "epoch": 1,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "min_mon_release_name": "tentacle",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_mons": 1
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "osdmap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "epoch": 1,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_osds": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_up_osds": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "osd_up_since": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_in_osds": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "osd_in_since": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_remapped_pgs": 0
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "pgmap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "pgs_by_state": [],
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_pgs": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_pools": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_objects": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "data_bytes": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "bytes_used": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "bytes_avail": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "bytes_total": 0
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "fsmap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "epoch": 1,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "btime": "2026-01-10T16:57:15:771836+0000",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "by_rank": [],
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "up:standby": 0
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "mgrmap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "available": true,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "num_standbys": 0,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "modules": [
Jan 10 16:57:28 compute-0 focused_joliot[75831]:             "iostat",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:             "nfs"
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         ],
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "services": {}
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "servicemap": {
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "epoch": 1,
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 16:57:28 compute-0 focused_joliot[75831]:         "services": {}
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     },
Jan 10 16:57:28 compute-0 focused_joliot[75831]:     "progress_events": {}
Jan 10 16:57:28 compute-0 focused_joliot[75831]: }
Jan 10 16:57:28 compute-0 systemd[1]: libpod-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope: Deactivated successfully.
Jan 10 16:57:28 compute-0 podman[75815]: 2026-01-10 16:57:28.550647997 +0000 UTC m=+0.756027469 container died 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 16:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474-merged.mount: Deactivated successfully.
Jan 10 16:57:28 compute-0 podman[75815]: 2026-01-10 16:57:28.602936202 +0000 UTC m=+0.808315664 container remove 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 16:57:28 compute-0 systemd[1]: libpod-conmon-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope: Deactivated successfully.
Jan 10 16:57:28 compute-0 podman[75871]: 2026-01-10 16:57:28.708962571 +0000 UTC m=+0.068854138 container create 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:28 compute-0 systemd[1]: Started libpod-conmon-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope.
Jan 10 16:57:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:28 compute-0 podman[75871]: 2026-01-10 16:57:28.684752041 +0000 UTC m=+0.044643638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:28 compute-0 podman[75871]: 2026-01-10 16:57:28.790285236 +0000 UTC m=+0.150176833 container init 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:57:28 compute-0 podman[75871]: 2026-01-10 16:57:28.795462967 +0000 UTC m=+0.155354534 container start 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 16:57:28 compute-0 podman[75871]: 2026-01-10 16:57:28.798228443 +0000 UTC m=+0.158120010 container attach 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:28 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:28 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:28 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mkxlpr(active, since 2s)
Jan 10 16:57:28 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2128525717' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 16:57:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 10 16:57:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862259952' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 16:57:29 compute-0 great_jennings[75887]: 
Jan 10 16:57:29 compute-0 great_jennings[75887]: [global]
Jan 10 16:57:29 compute-0 great_jennings[75887]:         fsid = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:29 compute-0 great_jennings[75887]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 10 16:57:29 compute-0 great_jennings[75887]:         osd_crush_chooseleaf_type = 0
Jan 10 16:57:29 compute-0 systemd[1]: libpod-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope: Deactivated successfully.
Jan 10 16:57:29 compute-0 podman[75871]: 2026-01-10 16:57:29.238983212 +0000 UTC m=+0.598874769 container died 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4-merged.mount: Deactivated successfully.
Jan 10 16:57:29 compute-0 podman[75871]: 2026-01-10 16:57:29.280898044 +0000 UTC m=+0.640789601 container remove 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:29 compute-0 systemd[1]: libpod-conmon-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope: Deactivated successfully.
Jan 10 16:57:29 compute-0 podman[75925]: 2026-01-10 16:57:29.354728426 +0000 UTC m=+0.051564996 container create 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:29 compute-0 systemd[1]: Started libpod-conmon-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope.
Jan 10 16:57:29 compute-0 podman[75925]: 2026-01-10 16:57:29.329637222 +0000 UTC m=+0.026473822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:29 compute-0 podman[75925]: 2026-01-10 16:57:29.455559433 +0000 UTC m=+0.152396013 container init 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 16:57:29 compute-0 podman[75925]: 2026-01-10 16:57:29.462140482 +0000 UTC m=+0.158977062 container start 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 16:57:29 compute-0 podman[75925]: 2026-01-10 16:57:29.465671338 +0000 UTC m=+0.162507948 container attach 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 10 16:57:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 10 16:57:29 compute-0 ceph-mon[75249]: mgrmap e4: compute-0.mkxlpr(active, since 2s)
Jan 10 16:57:29 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862259952' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 16:57:29 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 10 16:57:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  1: '-n'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  2: 'mgr.compute-0.mkxlpr'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  3: '-f'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  4: '--setuser'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  5: 'ceph'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  6: '--setgroup'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  7: 'ceph'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  8: '--default-log-to-file=false'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  9: '--default-log-to-journald=true'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr respawn  exe_path /proc/self/exe
Jan 10 16:57:30 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mkxlpr(active, since 3s)
Jan 10 16:57:30 compute-0 systemd[1]: libpod-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope: Deactivated successfully.
Jan 10 16:57:30 compute-0 podman[75925]: 2026-01-10 16:57:30.056552088 +0000 UTC m=+0.753388648 container died 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 16:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e-merged.mount: Deactivated successfully.
Jan 10 16:57:30 compute-0 podman[75925]: 2026-01-10 16:57:30.130178424 +0000 UTC m=+0.827014984 container remove 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 16:57:30 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: ignoring --setuser ceph since I am not root
Jan 10 16:57:30 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: ignoring --setgroup ceph since I am not root
Jan 10 16:57:30 compute-0 systemd[1]: libpod-conmon-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope: Deactivated successfully.
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: pidfile_write: ignore empty --pid-file
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'alerts'
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.202884555 +0000 UTC m=+0.049573252 container create 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.180482244 +0000 UTC m=+0.027170961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:30 compute-0 systemd[1]: Started libpod-conmon-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope.
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'balancer'
Jan 10 16:57:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:30 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'cephadm'
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.425614994 +0000 UTC m=+0.272303721 container init 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.432048949 +0000 UTC m=+0.278737646 container start 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.454413478 +0000 UTC m=+0.301102175 container attach 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 16:57:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146363671' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 16:57:30 compute-0 eager_lichterman[76016]: {
Jan 10 16:57:30 compute-0 eager_lichterman[76016]:     "epoch": 5,
Jan 10 16:57:30 compute-0 eager_lichterman[76016]:     "available": true,
Jan 10 16:57:30 compute-0 eager_lichterman[76016]:     "active_name": "compute-0.mkxlpr",
Jan 10 16:57:30 compute-0 eager_lichterman[76016]:     "num_standby": 0
Jan 10 16:57:30 compute-0 eager_lichterman[76016]: }
Jan 10 16:57:30 compute-0 systemd[1]: libpod-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope: Deactivated successfully.
Jan 10 16:57:30 compute-0 podman[75988]: 2026-01-10 16:57:30.97933139 +0000 UTC m=+0.826020087 container died 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 16:57:31 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 10 16:57:31 compute-0 ceph-mon[75249]: mgrmap e5: compute-0.mkxlpr(active, since 3s)
Jan 10 16:57:31 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4146363671' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 16:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e-merged.mount: Deactivated successfully.
Jan 10 16:57:31 compute-0 podman[75988]: 2026-01-10 16:57:31.164626129 +0000 UTC m=+1.011314826 container remove 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 16:57:31 compute-0 systemd[1]: libpod-conmon-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope: Deactivated successfully.
Jan 10 16:57:31 compute-0 podman[76067]: 2026-01-10 16:57:31.237640568 +0000 UTC m=+0.048692677 container create 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 16:57:31 compute-0 systemd[1]: Started libpod-conmon-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope.
Jan 10 16:57:31 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:31 compute-0 podman[76067]: 2026-01-10 16:57:31.305338503 +0000 UTC m=+0.116390632 container init 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:31 compute-0 podman[76067]: 2026-01-10 16:57:31.312044166 +0000 UTC m=+0.123096265 container start 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 16:57:31 compute-0 podman[76067]: 2026-01-10 16:57:31.219457293 +0000 UTC m=+0.030509422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:31 compute-0 podman[76067]: 2026-01-10 16:57:31.315373536 +0000 UTC m=+0.126425645 container attach 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 16:57:31 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'crash'
Jan 10 16:57:31 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'dashboard'
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'devicehealth'
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 16:57:32 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 16:57:32 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 16:57:32 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]:   from numpy import show_config as show_numpy_config
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'influx'
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'insights'
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'iostat'
Jan 10 16:57:32 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'k8sevents'
Jan 10 16:57:33 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'localpool'
Jan 10 16:57:33 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 16:57:33 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'mirroring'
Jan 10 16:57:33 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'nfs'
Jan 10 16:57:33 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'orchestrator'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'osd_support'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'progress'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'prometheus'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rbd_support'
Jan 10 16:57:34 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rgw'
Jan 10 16:57:35 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'rook'
Jan 10 16:57:35 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'selftest'
Jan 10 16:57:35 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'smb'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'snap_schedule'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'stats'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'status'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'telegraf'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'telemetry'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'test_orchestrator'
Jan 10 16:57:36 compute-0 ceph-mgr[75538]: mgr[py] Loading python module 'volumes'
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mkxlpr restarted
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mkxlpr
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: ms_deliver_dispatch: unhandled message 0x55a9b2db6000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: mgr handle_mgr_map Activating!
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: mgr handle_mgr_map I am now activating
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mkxlpr(active, starting, since 0.738038s)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:57:37 compute-0 ceph-mon[75249]: Active manager daemon compute-0.mkxlpr restarted
Jan 10 16:57:37 compute-0 ceph-mon[75249]: Activating manager daemon compute-0.mkxlpr
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} v 0)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e1 all = 1
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 16:57:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: balancer
Jan 10 16:57:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Manager daemon compute-0.mkxlpr is now available
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Starting
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:57:37
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 16:57:37 compute-0 ceph-mgr[75538]: [balancer INFO root] No pools available
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918966 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: cephadm
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: crash
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: devicehealth
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Starting
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: iostat
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: nfs
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: orchestrator
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: pg_autoscaler
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: progress
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [progress INFO root] Loading...
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [progress INFO root] No stored events to load
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [progress INFO root] Loaded [] historic events
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [progress INFO root] Loaded OSDMap, ready.
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] recovery thread starting
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] starting setup
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: rbd_support
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: status
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: telemetry
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] PerfHandler: starting
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TaskHandler: starting
Jan 10 16:57:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} v 0)
Jan 10 16:57:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] setup complete
Jan 10 16:57:38 compute-0 ceph-mgr[75538]: mgr load Constructed class from module: volumes
Jan 10 16:57:39 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 16:57:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:57:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:40 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:41 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 10 16:57:41 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mkxlpr(active, since 4s)
Jan 10 16:57:41 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 10 16:57:41 compute-0 bold_mcnulty[76083]: {
Jan 10 16:57:41 compute-0 bold_mcnulty[76083]:     "mgrmap_epoch": 7,
Jan 10 16:57:41 compute-0 bold_mcnulty[76083]:     "initialized": true
Jan 10 16:57:41 compute-0 bold_mcnulty[76083]: }
Jan 10 16:57:41 compute-0 systemd[1]: libpod-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope: Deactivated successfully.
Jan 10 16:57:41 compute-0 podman[76067]: 2026-01-10 16:57:41.648216168 +0000 UTC m=+10.459268287 container died 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:41 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:42 compute-0 ceph-mon[75249]: osdmap e2: 0 total, 0 up, 0 in
Jan 10 16:57:42 compute-0 ceph-mon[75249]: mgrmap e6: compute-0.mkxlpr(active, starting, since 0.738038s)
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: Manager daemon compute-0.mkxlpr is now available
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 16:57:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 16:57:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a-merged.mount: Deactivated successfully.
Jan 10 16:57:42 compute-0 podman[76067]: 2026-01-10 16:57:42.443944019 +0000 UTC m=+11.254996128 container remove 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 16:57:42 compute-0 systemd[1]: libpod-conmon-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope: Deactivated successfully.
Jan 10 16:57:42 compute-0 podman[76255]: 2026-01-10 16:57:42.526258522 +0000 UTC m=+0.052864581 container create ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 10 16:57:42 compute-0 systemd[1]: Started libpod-conmon-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope.
Jan 10 16:57:42 compute-0 podman[76255]: 2026-01-10 16:57:42.499127833 +0000 UTC m=+0.025733922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:42 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:42 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mkxlpr(active, since 5s)
Jan 10 16:57:42 compute-0 podman[76255]: 2026-01-10 16:57:42.624441656 +0000 UTC m=+0.151047755 container init ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 16:57:42 compute-0 podman[76255]: 2026-01-10 16:57:42.635153198 +0000 UTC m=+0.161759267 container start ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:42 compute-0 podman[76255]: 2026-01-10 16:57:42.639219639 +0000 UTC m=+0.165825758 container attach ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:42 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 10 16:57:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 10 16:57:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052876 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:57:43 compute-0 ceph-mon[75249]: Found migration_current of "None". Setting to last migration.
Jan 10 16:57:43 compute-0 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 16:57:43 compute-0 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 16:57:43 compute-0 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 16:57:43 compute-0 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 16:57:43 compute-0 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 16:57:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:43 compute-0 ceph-mon[75249]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 10 16:57:43 compute-0 ceph-mon[75249]: mgrmap e7: compute-0.mkxlpr(active, since 4s)
Jan 10 16:57:43 compute-0 ceph-mon[75249]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 10 16:57:43 compute-0 ceph-mon[75249]: mgrmap e8: compute-0.mkxlpr(active, since 5s)
Jan 10 16:57:43 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 10 16:57:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 10 16:57:43 compute-0 stoic_kepler[76270]: module 'orchestrator' is already enabled (always-on)
Jan 10 16:57:43 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mkxlpr(active, since 6s)
Jan 10 16:57:43 compute-0 systemd[1]: libpod-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope: Deactivated successfully.
Jan 10 16:57:43 compute-0 podman[76255]: 2026-01-10 16:57:43.633244083 +0000 UTC m=+1.159850132 container died ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3-merged.mount: Deactivated successfully.
Jan 10 16:57:43 compute-0 podman[76255]: 2026-01-10 16:57:43.671422053 +0000 UTC m=+1.198028112 container remove ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:57:43 compute-0 systemd[1]: libpod-conmon-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope: Deactivated successfully.
Jan 10 16:57:43 compute-0 podman[76310]: 2026-01-10 16:57:43.769113855 +0000 UTC m=+0.068133468 container create 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:57:43 compute-0 systemd[1]: Started libpod-conmon-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope.
Jan 10 16:57:43 compute-0 podman[76310]: 2026-01-10 16:57:43.741634436 +0000 UTC m=+0.040654059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:43 compute-0 podman[76310]: 2026-01-10 16:57:43.870057465 +0000 UTC m=+0.169077098 container init 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 16:57:43 compute-0 podman[76310]: 2026-01-10 16:57:43.882209416 +0000 UTC m=+0.181229009 container start 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:57:43 compute-0 podman[76310]: 2026-01-10 16:57:43.885743133 +0000 UTC m=+0.184762716 container attach 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:43 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:44 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 10 16:57:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:57:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:44 compute-0 systemd[1]: libpod-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope: Deactivated successfully.
Jan 10 16:57:44 compute-0 podman[76310]: 2026-01-10 16:57:44.407986042 +0000 UTC m=+0.707005625 container died 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea-merged.mount: Deactivated successfully.
Jan 10 16:57:44 compute-0 podman[76310]: 2026-01-10 16:57:44.447508649 +0000 UTC m=+0.746528232 container remove 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 16:57:44 compute-0 systemd[1]: libpod-conmon-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope: Deactivated successfully.
Jan 10 16:57:44 compute-0 podman[76364]: 2026-01-10 16:57:44.520897499 +0000 UTC m=+0.049800578 container create 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:44 compute-0 systemd[1]: Started libpod-conmon-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope.
Jan 10 16:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:44 compute-0 podman[76364]: 2026-01-10 16:57:44.496852553 +0000 UTC m=+0.025755702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:44 compute-0 podman[76364]: 2026-01-10 16:57:44.602248595 +0000 UTC m=+0.131151694 container init 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:44 compute-0 podman[76364]: 2026-01-10 16:57:44.609906634 +0000 UTC m=+0.138809703 container start 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 16:57:44 compute-0 podman[76364]: 2026-01-10 16:57:44.61563279 +0000 UTC m=+0.144535869 container attach 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:57:44 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 10 16:57:44 compute-0 ceph-mon[75249]: mgrmap e9: compute-0.mkxlpr(active, since 6s)
Jan 10 16:57:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:44 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 10 16:57:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_user
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 10 16:57:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 10 16:57:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_config
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 10 16:57:45 compute-0 busy_faraday[76380]: ssh user set to ceph-admin. sudo will be used
Jan 10 16:57:45 compute-0 systemd[1]: libpod-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76364]: 2026-01-10 16:57:45.075591212 +0000 UTC m=+0.604494391 container died 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 16:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b-merged.mount: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76364]: 2026-01-10 16:57:45.122813709 +0000 UTC m=+0.651716798 container remove 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 16:57:45 compute-0 systemd[1]: libpod-conmon-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.198533012 +0000 UTC m=+0.049743727 container create 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:45 compute-0 systemd[1]: Started libpod-conmon-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope.
Jan 10 16:57:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.257653813 +0000 UTC m=+0.108864508 container init 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.268420856 +0000 UTC m=+0.119631551 container start 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.177400286 +0000 UTC m=+0.028611001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.27259597 +0000 UTC m=+0.123806665 container attach 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 10 16:57:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: [cephadm INFO root] Set ssh private key
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 10 16:57:45 compute-0 systemd[1]: libpod-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.700438337 +0000 UTC m=+0.551649012 container died 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 16:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed-merged.mount: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76417]: 2026-01-10 16:57:45.767639208 +0000 UTC m=+0.618849883 container remove 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:45 compute-0 systemd[1]: libpod-conmon-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope: Deactivated successfully.
Jan 10 16:57:45 compute-0 podman[76466]: 2026-01-10 16:57:45.828477796 +0000 UTC m=+0.037910754 container create 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 16:57:45 compute-0 systemd[1]: Started libpod-conmon-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope.
Jan 10 16:57:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:45 compute-0 podman[76466]: 2026-01-10 16:57:45.895116661 +0000 UTC m=+0.104549629 container init 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:45 compute-0 podman[76466]: 2026-01-10 16:57:45.899802699 +0000 UTC m=+0.109235677 container start 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:45 compute-0 podman[76466]: 2026-01-10 16:57:45.904444466 +0000 UTC m=+0.113877444 container attach 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 16:57:45 compute-0 podman[76466]: 2026-01-10 16:57:45.813006694 +0000 UTC m=+0.022439652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:45 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:46 compute-0 ceph-mon[75249]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:46 compute-0 ceph-mon[75249]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:46 compute-0 ceph-mon[75249]: Set ssh ssh_user
Jan 10 16:57:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:46 compute-0 ceph-mon[75249]: Set ssh ssh_config
Jan 10 16:57:46 compute-0 ceph-mon[75249]: ssh user set to ceph-admin. sudo will be used
Jan 10 16:57:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:46 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 10 16:57:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:46 compute-0 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 10 16:57:46 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 10 16:57:46 compute-0 systemd[1]: libpod-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope: Deactivated successfully.
Jan 10 16:57:46 compute-0 podman[76466]: 2026-01-10 16:57:46.315556306 +0000 UTC m=+0.524989264 container died 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761-merged.mount: Deactivated successfully.
Jan 10 16:57:46 compute-0 podman[76466]: 2026-01-10 16:57:46.424119724 +0000 UTC m=+0.633552682 container remove 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 16:57:46 compute-0 systemd[1]: libpod-conmon-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope: Deactivated successfully.
Jan 10 16:57:46 compute-0 podman[76525]: 2026-01-10 16:57:46.488773976 +0000 UTC m=+0.045609394 container create 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:46 compute-0 systemd[1]: Started libpod-conmon-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope.
Jan 10 16:57:46 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:46 compute-0 podman[76525]: 2026-01-10 16:57:46.464575436 +0000 UTC m=+0.021410944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:46 compute-0 podman[76525]: 2026-01-10 16:57:46.56452567 +0000 UTC m=+0.121361118 container init 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:46 compute-0 podman[76525]: 2026-01-10 16:57:46.570561384 +0000 UTC m=+0.127396812 container start 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:46 compute-0 podman[76525]: 2026-01-10 16:57:46.57480552 +0000 UTC m=+0.131640938 container attach 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:57:46 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:47 compute-0 ceph-mon[75249]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:47 compute-0 ceph-mon[75249]: Set ssh ssh_identity_key
Jan 10 16:57:47 compute-0 ceph-mon[75249]: Set ssh private key
Jan 10 16:57:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:47 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:47 compute-0 serene_meitner[76542]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrpqUxP+rrDq/Ne49spAJer50GELazLs3h1Q1nSPstwxnA2ih3Wxc7ipm12Sm1nyI+Pq8x+KeJfj8IpMWVJpnmQgku7OTLPpBCMUgZqvskdS1H2Lo01SrPTBNx3RSKKPUYYjNr2DxupexnUOZYFwiOGp9jEEzNk4MSgMtVcbTdfsfcmKdVlHjFydPyM0m2/FrIviz1xEQHZcABIIMi1tW0wmhobTPykznW/rR0uMofhcN8Ktm20+RCa5/1KZ800IrngzeRoXPZuq10fqggUWj0mJJ9MfqSz2dOblfXIYKAO7QA+vJd1s92aBmtAORIFSXqs6pGZcuml5k1iJb8gHy/FOl4u/jVrcBfDoC6g7CEmIksIVSAHsRWhzYiZNbBf2pJQjwzzSTBxh7T2deblHWj1XFJxdfNNeQacucZThihEExtBiXou8QGNVNs5s8Oe4pE+gjOhim955mz3GivSHu8b0T44AWrjaB1p6W28JvYYhl4DYSfw6kawGZpupmbDEc= zuul@controller
Jan 10 16:57:47 compute-0 systemd[1]: libpod-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope: Deactivated successfully.
Jan 10 16:57:47 compute-0 podman[76525]: 2026-01-10 16:57:47.097875432 +0000 UTC m=+0.654710870 container died 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 16:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8-merged.mount: Deactivated successfully.
Jan 10 16:57:47 compute-0 podman[76525]: 2026-01-10 16:57:47.144923514 +0000 UTC m=+0.701758942 container remove 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:47 compute-0 systemd[1]: libpod-conmon-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope: Deactivated successfully.
Jan 10 16:57:47 compute-0 podman[76579]: 2026-01-10 16:57:47.24937201 +0000 UTC m=+0.074762048 container create 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:57:47 compute-0 systemd[1]: Started libpod-conmon-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope.
Jan 10 16:57:47 compute-0 podman[76579]: 2026-01-10 16:57:47.218649703 +0000 UTC m=+0.044039791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:47 compute-0 podman[76579]: 2026-01-10 16:57:47.341254393 +0000 UTC m=+0.166644491 container init 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 16:57:47 compute-0 podman[76579]: 2026-01-10 16:57:47.346985059 +0000 UTC m=+0.172375107 container start 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:57:47 compute-0 podman[76579]: 2026-01-10 16:57:47.351494402 +0000 UTC m=+0.176884400 container attach 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:47 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:47 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:47 compute-0 sshd-session[76621]: Accepted publickey for ceph-admin from 192.168.122.100 port 57244 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:48 compute-0 systemd-logind[798]: New session 21 of user ceph-admin.
Jan 10 16:57:48 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 10 16:57:48 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 10 16:57:48 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 10 16:57:48 compute-0 ceph-mon[75249]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:48 compute-0 ceph-mon[75249]: Set ssh ssh_identity_pub
Jan 10 16:57:48 compute-0 ceph-mon[75249]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:48 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 10 16:57:48 compute-0 systemd[76625]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:48 compute-0 sshd-session[76633]: Accepted publickey for ceph-admin from 192.168.122.100 port 57256 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:48 compute-0 systemd-logind[798]: New session 23 of user ceph-admin.
Jan 10 16:57:48 compute-0 systemd[76625]: Queued start job for default target Main User Target.
Jan 10 16:57:48 compute-0 systemd[76625]: Created slice User Application Slice.
Jan 10 16:57:48 compute-0 systemd[76625]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 16:57:48 compute-0 systemd[76625]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 16:57:48 compute-0 systemd[76625]: Reached target Paths.
Jan 10 16:57:48 compute-0 systemd[76625]: Reached target Timers.
Jan 10 16:57:48 compute-0 systemd[76625]: Starting D-Bus User Message Bus Socket...
Jan 10 16:57:48 compute-0 systemd[76625]: Starting Create User's Volatile Files and Directories...
Jan 10 16:57:48 compute-0 systemd[76625]: Listening on D-Bus User Message Bus Socket.
Jan 10 16:57:48 compute-0 systemd[76625]: Reached target Sockets.
Jan 10 16:57:48 compute-0 systemd[76625]: Finished Create User's Volatile Files and Directories.
Jan 10 16:57:48 compute-0 systemd[76625]: Reached target Basic System.
Jan 10 16:57:48 compute-0 systemd[76625]: Reached target Main User Target.
Jan 10 16:57:48 compute-0 systemd[76625]: Startup finished in 181ms.
Jan 10 16:57:48 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 10 16:57:48 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 10 16:57:48 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 10 16:57:48 compute-0 sshd-session[76621]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:48 compute-0 sshd-session[76633]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054706 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:57:48 compute-0 sudo[76645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:48 compute-0 sudo[76645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:48 compute-0 sudo[76645]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:48 compute-0 sshd-session[76670]: Accepted publickey for ceph-admin from 192.168.122.100 port 57262 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:48 compute-0 systemd-logind[798]: New session 24 of user ceph-admin.
Jan 10 16:57:48 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 10 16:57:48 compute-0 sshd-session[76670]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:48 compute-0 sudo[76674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 10 16:57:48 compute-0 sudo[76674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:48 compute-0 sudo[76674]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:48 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:49 compute-0 sshd-session[76699]: Accepted publickey for ceph-admin from 192.168.122.100 port 57270 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:49 compute-0 systemd-logind[798]: New session 25 of user ceph-admin.
Jan 10 16:57:49 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 10 16:57:49 compute-0 sshd-session[76699]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:49 compute-0 ceph-mon[75249]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:49 compute-0 sudo[76703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 10 16:57:49 compute-0 sudo[76703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:49 compute-0 sudo[76703]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:49 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 10 16:57:49 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 10 16:57:49 compute-0 sshd-session[76728]: Accepted publickey for ceph-admin from 192.168.122.100 port 57276 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:49 compute-0 systemd-logind[798]: New session 26 of user ceph-admin.
Jan 10 16:57:49 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 10 16:57:49 compute-0 sshd-session[76728]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:49 compute-0 sudo[76732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:49 compute-0 sudo[76732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:49 compute-0 sudo[76732]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:49 compute-0 sshd-session[76757]: Accepted publickey for ceph-admin from 192.168.122.100 port 57278 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:49 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Jan 10 16:57:49 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 10 16:57:49 compute-0 sshd-session[76757]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:49 compute-0 sudo[76761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:49 compute-0 sudo[76761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:49 compute-0 sudo[76761]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:49 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:50 compute-0 sshd-session[76786]: Accepted publickey for ceph-admin from 192.168.122.100 port 57286 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:50 compute-0 systemd-logind[798]: New session 28 of user ceph-admin.
Jan 10 16:57:50 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 10 16:57:50 compute-0 sshd-session[76786]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:50 compute-0 ceph-mon[75249]: Deploying cephadm binary to compute-0
Jan 10 16:57:50 compute-0 sudo[76790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 10 16:57:50 compute-0 sudo[76790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:50 compute-0 sudo[76790]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:50 compute-0 sshd-session[76815]: Accepted publickey for ceph-admin from 192.168.122.100 port 57292 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:50 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Jan 10 16:57:50 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 10 16:57:50 compute-0 sshd-session[76815]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:50 compute-0 sudo[76819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:57:50 compute-0 sudo[76819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:50 compute-0 sudo[76819]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:50 compute-0 sshd-session[76844]: Accepted publickey for ceph-admin from 192.168.122.100 port 57308 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:50 compute-0 systemd-logind[798]: New session 30 of user ceph-admin.
Jan 10 16:57:50 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 10 16:57:50 compute-0 sshd-session[76844]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:50 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:50 compute-0 sudo[76848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 10 16:57:50 compute-0 sudo[76848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:50 compute-0 sudo[76848]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:51 compute-0 sshd-session[76873]: Accepted publickey for ceph-admin from 192.168.122.100 port 57316 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:51 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Jan 10 16:57:51 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 10 16:57:51 compute-0 sshd-session[76873]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:51 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:52 compute-0 sshd-session[76900]: Accepted publickey for ceph-admin from 192.168.122.100 port 60630 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:52 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Jan 10 16:57:52 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 10 16:57:52 compute-0 sshd-session[76900]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:52 compute-0 sudo[76904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 10 16:57:52 compute-0 sudo[76904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:52 compute-0 sudo[76904]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:52 compute-0 sshd-session[76929]: Accepted publickey for ceph-admin from 192.168.122.100 port 60636 ssh2: RSA SHA256:OwXsRrMUqFeNidRfyqqHnD8cFQm/QSlnm0xkW+qjdao
Jan 10 16:57:52 compute-0 systemd-logind[798]: New session 33 of user ceph-admin.
Jan 10 16:57:52 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 10 16:57:52 compute-0 sshd-session[76929]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 10 16:57:52 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:53 compute-0 sudo[76933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 10 16:57:53 compute-0 sudo[76933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:57:53 compute-0 sudo[76933]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:57:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:53 compute-0 ceph-mgr[75538]: [cephadm INFO root] Added host compute-0
Jan 10 16:57:53 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 10 16:57:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:57:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:53 compute-0 festive_neumann[76595]: Added host 'compute-0' with addr '192.168.122.100'
Jan 10 16:57:53 compute-0 systemd[1]: libpod-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope: Deactivated successfully.
Jan 10 16:57:53 compute-0 podman[76579]: 2026-01-10 16:57:53.727564367 +0000 UTC m=+6.552954375 container died 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 16:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023-merged.mount: Deactivated successfully.
Jan 10 16:57:53 compute-0 sudo[76978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:53 compute-0 sudo[76978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:53 compute-0 sudo[76978]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:53 compute-0 podman[76579]: 2026-01-10 16:57:53.782933285 +0000 UTC m=+6.608323293 container remove 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:57:53 compute-0 systemd[1]: libpod-conmon-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope: Deactivated successfully.
Jan 10 16:57:53 compute-0 sudo[77015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 10 16:57:53 compute-0 sudo[77015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:53 compute-0 podman[77026]: 2026-01-10 16:57:53.860259072 +0000 UTC m=+0.050540298 container create db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:57:53 compute-0 systemd[1]: Started libpod-conmon-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope.
Jan 10 16:57:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:53 compute-0 podman[77026]: 2026-01-10 16:57:53.83450708 +0000 UTC m=+0.024788336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:53 compute-0 podman[77026]: 2026-01-10 16:57:53.95560545 +0000 UTC m=+0.145886776 container init db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 16:57:53 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:53 compute-0 podman[77026]: 2026-01-10 16:57:53.968348157 +0000 UTC m=+0.158629383 container start db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:53 compute-0 podman[77026]: 2026-01-10 16:57:53.972281264 +0000 UTC m=+0.162562500 container attach db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:57:54 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:54 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 10 16:57:54 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 10 16:57:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 16:57:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:54 compute-0 elastic_feynman[77057]: Scheduled mon update...
Jan 10 16:57:54 compute-0 podman[77026]: 2026-01-10 16:57:54.422654436 +0000 UTC m=+0.612935662 container died db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 16:57:54 compute-0 systemd[1]: libpod-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope: Deactivated successfully.
Jan 10 16:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60-merged.mount: Deactivated successfully.
Jan 10 16:57:54 compute-0 podman[77026]: 2026-01-10 16:57:54.473805269 +0000 UTC m=+0.664086495 container remove db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:54 compute-0 systemd[1]: libpod-conmon-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope: Deactivated successfully.
Jan 10 16:57:54 compute-0 podman[77120]: 2026-01-10 16:57:54.551564698 +0000 UTC m=+0.046569910 container create ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:57:54 compute-0 systemd[1]: Started libpod-conmon-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope.
Jan 10 16:57:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:54 compute-0 podman[77120]: 2026-01-10 16:57:54.530887354 +0000 UTC m=+0.025892586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:54 compute-0 podman[77120]: 2026-01-10 16:57:54.634673562 +0000 UTC m=+0.129678804 container init ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 16:57:54 compute-0 podman[77120]: 2026-01-10 16:57:54.641597481 +0000 UTC m=+0.136602693 container start ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:57:54 compute-0 podman[77120]: 2026-01-10 16:57:54.647042519 +0000 UTC m=+0.142047731 container attach ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:54 compute-0 podman[77077]: 2026-01-10 16:57:54.667859817 +0000 UTC m=+0.560172494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:54 compute-0 ceph-mon[75249]: Added host compute-0
Jan 10 16:57:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:57:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:54 compute-0 podman[77156]: 2026-01-10 16:57:54.813861965 +0000 UTC m=+0.069988188 container create e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:57:54 compute-0 systemd[1]: Started libpod-conmon-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope.
Jan 10 16:57:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:54 compute-0 podman[77156]: 2026-01-10 16:57:54.78689605 +0000 UTC m=+0.043022313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:54 compute-0 podman[77156]: 2026-01-10 16:57:54.889936337 +0000 UTC m=+0.146062570 container init e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 10 16:57:54 compute-0 podman[77156]: 2026-01-10 16:57:54.89443986 +0000 UTC m=+0.150566073 container start e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:54 compute-0 podman[77156]: 2026-01-10 16:57:54.898539962 +0000 UTC m=+0.154666195 container attach e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:54 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:55 compute-0 admiring_cartwright[77191]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 10 16:57:55 compute-0 systemd[1]: libpod-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope: Deactivated successfully.
Jan 10 16:57:55 compute-0 podman[77156]: 2026-01-10 16:57:55.015094758 +0000 UTC m=+0.271221011 container died e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c19efb4e5eaf5b29d1899115cdfa4cb4af4ef76b9b1376514bd5351357a32c3a-merged.mount: Deactivated successfully.
Jan 10 16:57:55 compute-0 podman[77156]: 2026-01-10 16:57:55.062250282 +0000 UTC m=+0.318376495 container remove e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:57:55 compute-0 systemd[1]: libpod-conmon-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope: Deactivated successfully.
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 10 16:57:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:57:55 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:55 compute-0 vigilant_feistel[77137]: Scheduled mgr update...
Jan 10 16:57:55 compute-0 sudo[77015]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 10 16:57:55 compute-0 systemd[1]: libpod-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope: Deactivated successfully.
Jan 10 16:57:55 compute-0 podman[77120]: 2026-01-10 16:57:55.126495713 +0000 UTC m=+0.621500925 container died ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:57:55 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813-merged.mount: Deactivated successfully.
Jan 10 16:57:55 compute-0 podman[77120]: 2026-01-10 16:57:55.168255341 +0000 UTC m=+0.663260553 container remove ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 16:57:55 compute-0 systemd[1]: libpod-conmon-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope: Deactivated successfully.
Jan 10 16:57:55 compute-0 sudo[77219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:55 compute-0 sudo[77219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:55 compute-0 sudo[77219]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.239566974 +0000 UTC m=+0.047467025 container create b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 16:57:55 compute-0 systemd[1]: Started libpod-conmon-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope.
Jan 10 16:57:55 compute-0 sudo[77261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 10 16:57:55 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.300193366 +0000 UTC m=+0.108093417 container init b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.305738857 +0000 UTC m=+0.113638908 container start b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.309636643 +0000 UTC m=+0.117536694 container attach b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.221322947 +0000 UTC m=+0.029223018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:55 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:57:55 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:55 compute-0 sudo[77335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:55 compute-0 sudo[77335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:55 compute-0 sudo[77335]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:55 compute-0 sudo[77360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:57:55 compute-0 sudo[77360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service crash spec with placement *
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 10 16:57:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 16:57:55 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:55 compute-0 priceless_hawking[77290]: Scheduled crash update...
Jan 10 16:57:55 compute-0 systemd[1]: libpod-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope: Deactivated successfully.
Jan 10 16:57:55 compute-0 podman[77236]: 2026-01-10 16:57:55.816611886 +0000 UTC m=+0.624511957 container died b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 16:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77-merged.mount: Deactivated successfully.
Jan 10 16:57:55 compute-0 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 16:57:56 compute-0 podman[77236]: 2026-01-10 16:57:56.003299663 +0000 UTC m=+0.811199714 container remove b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:56 compute-0 systemd[1]: libpod-conmon-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope: Deactivated successfully.
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.064046348 +0000 UTC m=+0.040737251 container create e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:57:56 compute-0 systemd[1]: Started libpod-conmon-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope.
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:56 compute-0 ceph-mon[75249]: Saving service mon spec with placement count:5
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:56 compute-0 ceph-mon[75249]: Saving service mgr spec with placement count:2
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:56 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.140594114 +0000 UTC m=+0.117285127 container init e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.047812226 +0000 UTC m=+0.024503149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.149390853 +0000 UTC m=+0.126081776 container start e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.153521766 +0000 UTC m=+0.130212719 container attach e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:56 compute-0 podman[77461]: 2026-01-10 16:57:56.214521358 +0000 UTC m=+0.055399881 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:56 compute-0 podman[77461]: 2026-01-10 16:57:56.340801349 +0000 UTC m=+0.181679922 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:56 compute-0 sudo[77360]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:57:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:56 compute-0 sudo[77558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:56 compute-0 sudo[77558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:56 compute-0 sudo[77558]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:56 compute-0 sudo[77583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 16:57:56 compute-0 sudo[77583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 10 16:57:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2764466694' entity='client.admin' 
Jan 10 16:57:56 compute-0 systemd[1]: libpod-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope: Deactivated successfully.
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.848279736 +0000 UTC m=+0.824970649 container died e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8-merged.mount: Deactivated successfully.
Jan 10 16:57:56 compute-0 podman[77407]: 2026-01-10 16:57:56.881567913 +0000 UTC m=+0.858258816 container remove e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 16:57:56 compute-0 systemd[1]: libpod-conmon-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope: Deactivated successfully.
Jan 10 16:57:56 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:56 compute-0 podman[77623]: 2026-01-10 16:57:56.962006523 +0000 UTC m=+0.058071032 container create 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:57 compute-0 systemd[1]: Started libpod-conmon-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope.
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:56.937809364 +0000 UTC m=+0.033873893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:57 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:57.057403092 +0000 UTC m=+0.153467601 container init 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:57.063375635 +0000 UTC m=+0.159440124 container start 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:57.067196679 +0000 UTC m=+0.163261188 container attach 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:57 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77657 (sysctl)
Jan 10 16:57:57 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 10 16:57:57 compute-0 ceph-mon[75249]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:57 compute-0 ceph-mon[75249]: Saving service crash spec with placement *
Jan 10 16:57:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:57 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2764466694' entity='client.admin' 
Jan 10 16:57:57 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 10 16:57:57 compute-0 sudo[77583]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:57 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 10 16:57:57 compute-0 sudo[77698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:57 compute-0 sudo[77698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:57 compute-0 sudo[77698]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:57 compute-0 systemd[1]: libpod-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope: Deactivated successfully.
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:57.498381278 +0000 UTC m=+0.594445767 container died 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 16:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5-merged.mount: Deactivated successfully.
Jan 10 16:57:57 compute-0 sudo[77725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 10 16:57:57 compute-0 sudo[77725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:57 compute-0 podman[77623]: 2026-01-10 16:57:57.600281834 +0000 UTC m=+0.696346323 container remove 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:57:57 compute-0 systemd[1]: libpod-conmon-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope: Deactivated successfully.
Jan 10 16:57:57 compute-0 podman[77763]: 2026-01-10 16:57:57.677096517 +0000 UTC m=+0.050149187 container create e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:57 compute-0 systemd[1]: Started libpod-conmon-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope.
Jan 10 16:57:57 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:57 compute-0 podman[77763]: 2026-01-10 16:57:57.653279848 +0000 UTC m=+0.026332498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:57 compute-0 podman[77763]: 2026-01-10 16:57:57.766207345 +0000 UTC m=+0.139260015 container init e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:57:57 compute-0 podman[77763]: 2026-01-10 16:57:57.772905848 +0000 UTC m=+0.145958518 container start e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 10 16:57:57 compute-0 podman[77763]: 2026-01-10 16:57:57.777623156 +0000 UTC m=+0.150675876 container attach e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 16:57:57 compute-0 sudo[77725]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:57:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:57 compute-0 ceph-mgr[75538]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 10 16:57:57 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:57:57 compute-0 ceph-mon[75249]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 10 16:57:57 compute-0 sudo[77823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:57:57 compute-0 sudo[77823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:57 compute-0 sudo[77823]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:58 compute-0 sudo[77848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- inventory --format=json-pretty --filter-for-batch
Jan 10 16:57:58 compute-0 sudo[77848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:57:58 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:57:58 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:58 compute-0 ceph-mgr[75538]: [cephadm INFO root] Added label _admin to host compute-0
Jan 10 16:57:58 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 10 16:57:58 compute-0 cool_shockley[77780]: Added label _admin to host compute-0
Jan 10 16:57:58 compute-0 systemd[1]: libpod-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope: Deactivated successfully.
Jan 10 16:57:58 compute-0 podman[77763]: 2026-01-10 16:57:58.222641721 +0000 UTC m=+0.595694351 container died e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 16:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476-merged.mount: Deactivated successfully.
Jan 10 16:57:58 compute-0 podman[77763]: 2026-01-10 16:57:58.275048239 +0000 UTC m=+0.648100869 container remove e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:57:58 compute-0 systemd[1]: libpod-conmon-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope: Deactivated successfully.
Jan 10 16:57:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.357917647 +0000 UTC m=+0.044557965 container create a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 16:57:58 compute-0 podman[77894]: 2026-01-10 16:57:58.362552523 +0000 UTC m=+0.052870101 container create 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:57:58 compute-0 systemd[1]: Started libpod-conmon-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope.
Jan 10 16:57:58 compute-0 systemd[1]: Started libpod-conmon-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope.
Jan 10 16:57:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.335493406 +0000 UTC m=+0.022133764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:57:58 compute-0 podman[77894]: 2026-01-10 16:57:58.34295734 +0000 UTC m=+0.033274938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:57:58 compute-0 podman[77894]: 2026-01-10 16:57:58.439438298 +0000 UTC m=+0.129755916 container init 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.455499826 +0000 UTC m=+0.142140174 container init a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:58 compute-0 podman[77894]: 2026-01-10 16:57:58.456674608 +0000 UTC m=+0.146992186 container start 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:57:58 compute-0 podman[77894]: 2026-01-10 16:57:58.460115482 +0000 UTC m=+0.150433340 container attach 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.467189345 +0000 UTC m=+0.153829663 container start a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:57:58 compute-0 elastic_elgamal[77928]: 167 167
Jan 10 16:57:58 compute-0 systemd[1]: libpod-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope: Deactivated successfully.
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.47619492 +0000 UTC m=+0.162835278 container attach a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.476850068 +0000 UTC m=+0.163490466 container died a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:57:58 compute-0 ceph-mon[75249]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:57:58 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:58 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:58 compute-0 ceph-mon[75249]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:57:58 compute-0 ceph-mon[75249]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 10 16:57:58 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf2ddd64f891b3a062f505591811670ecd188dce8e6382467e04d24bf517a15-merged.mount: Deactivated successfully.
Jan 10 16:57:58 compute-0 podman[77898]: 2026-01-10 16:57:58.526557212 +0000 UTC m=+0.213197510 container remove a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 16:57:58 compute-0 systemd[1]: libpod-conmon-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope: Deactivated successfully.
Jan 10 16:57:58 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:57:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 10 16:57:59 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/976546190' entity='client.admin' 
Jan 10 16:57:59 compute-0 zen_bardeen[77930]: set mgr/dashboard/cluster/status
Jan 10 16:57:59 compute-0 systemd[1]: libpod-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope: Deactivated successfully.
Jan 10 16:57:59 compute-0 conmon[77930]: conmon 5d47000b14f360478494 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope/container/memory.events
Jan 10 16:57:59 compute-0 podman[77894]: 2026-01-10 16:57:59.051265019 +0000 UTC m=+0.741582607 container died 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da-merged.mount: Deactivated successfully.
Jan 10 16:57:59 compute-0 podman[77894]: 2026-01-10 16:57:59.103278486 +0000 UTC m=+0.793596094 container remove 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:59 compute-0 systemd[1]: libpod-conmon-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope: Deactivated successfully.
Jan 10 16:57:59 compute-0 systemd[1]: Reloading.
Jan 10 16:57:59 compute-0 systemd-rc-local-generator[78015]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:57:59 compute-0 systemd-sysv-generator[78020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:57:59 compute-0 sudo[74163]: pam_unix(sudo:session): session closed for user root
Jan 10 16:57:59 compute-0 podman[78031]: 2026-01-10 16:57:59.673283647 +0000 UTC m=+0.064275943 container create decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:59 compute-0 systemd[1]: Started libpod-conmon-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope.
Jan 10 16:57:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:57:59 compute-0 podman[78031]: 2026-01-10 16:57:59.65287154 +0000 UTC m=+0.043863846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:57:59 compute-0 podman[78031]: 2026-01-10 16:57:59.762465577 +0000 UTC m=+0.153457923 container init decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:57:59 compute-0 podman[78031]: 2026-01-10 16:57:59.777409974 +0000 UTC m=+0.168402280 container start decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 16:57:59 compute-0 podman[78031]: 2026-01-10 16:57:59.78240848 +0000 UTC m=+0.173400776 container attach decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:57:59 compute-0 sudo[78075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacbjkxfhteznlykgfosgrhqaksmvara ; /usr/bin/python3'
Jan 10 16:57:59 compute-0 sudo[78075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:57:59 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:00 compute-0 python3[78077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:00 compute-0 ceph-mon[75249]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:58:00 compute-0 ceph-mon[75249]: Added label _admin to host compute-0
Jan 10 16:58:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/976546190' entity='client.admin' 
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.095901102 +0000 UTC m=+0.066563285 container create 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:00 compute-0 systemd[1]: Started libpod-conmon-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope.
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.067274322 +0000 UTC m=+0.037936555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:00 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.21657738 +0000 UTC m=+0.187239603 container init 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.227192259 +0000 UTC m=+0.197854442 container start 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.232032311 +0000 UTC m=+0.202694494 container attach 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:00 compute-0 youthful_beaver[78047]: [
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:     {
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "available": false,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "being_replaced": false,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "ceph_device_lvm": false,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "lsm_data": {},
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "lvs": [],
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "path": "/dev/sr0",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "rejected_reasons": [
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "Insufficient space (<5GB)",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "Has a FileSystem"
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         ],
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         "sys_api": {
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "actuators": null,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "device_nodes": [
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:                 "sr0"
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             ],
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "devname": "sr0",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "human_readable_size": "482.00 KB",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "id_bus": "ata",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "model": "QEMU DVD-ROM",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "nr_requests": "2",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "parent": "/dev/sr0",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "partitions": {},
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "path": "/dev/sr0",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "removable": "1",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "rev": "2.5+",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "ro": "0",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "rotational": "1",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "sas_address": "",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "sas_device_handle": "",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "scheduler_mode": "mq-deadline",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "sectors": 0,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "sectorsize": "2048",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "size": 493568.0,
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "support_discard": "2048",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "type": "disk",
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:             "vendor": "QEMU"
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:         }
Jan 10 16:58:00 compute-0 youthful_beaver[78047]:     }
Jan 10 16:58:00 compute-0 youthful_beaver[78047]: ]
Jan 10 16:58:00 compute-0 systemd[1]: libpod-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope: Deactivated successfully.
Jan 10 16:58:00 compute-0 podman[78031]: 2026-01-10 16:58:00.399544695 +0000 UTC m=+0.790536961 container died decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f-merged.mount: Deactivated successfully.
Jan 10 16:58:00 compute-0 podman[78031]: 2026-01-10 16:58:00.44303401 +0000 UTC m=+0.834026266 container remove decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:58:00 compute-0 systemd[1]: libpod-conmon-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope: Deactivated successfully.
Jan 10 16:58:00 compute-0 sudo[77848]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:00 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 10 16:58:00 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 10 16:58:00 compute-0 sudo[78798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 10 16:58:00 compute-0 sudo[78798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:00 compute-0 sudo[78798]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 10 16:58:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/156437027' entity='client.admin' 
Jan 10 16:58:00 compute-0 sudo[78823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph
Jan 10 16:58:00 compute-0 sudo[78823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:00 compute-0 sudo[78823]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 systemd[1]: libpod-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope: Deactivated successfully.
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.675322298 +0000 UTC m=+0.645984441 container died 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5-merged.mount: Deactivated successfully.
Jan 10 16:58:00 compute-0 podman[78080]: 2026-01-10 16:58:00.71615231 +0000 UTC m=+0.686814453 container remove 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:00 compute-0 systemd[1]: libpod-conmon-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope: Deactivated successfully.
Jan 10 16:58:00 compute-0 sudo[78075]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 sudo[78851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.conf.new
Jan 10 16:58:00 compute-0 sudo[78851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:00 compute-0 sudo[78851]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 sudo[78887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:00 compute-0 sudo[78887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:00 compute-0 sudo[78887]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 sudo[78912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.conf.new
Jan 10 16:58:00 compute-0 sudo[78912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:00 compute-0 sudo[78912]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:00 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:01 compute-0 sudo[78960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[78960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[78960]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 ceph-mon[75249]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/156437027' entity='client.admin' 
Jan 10 16:58:01 compute-0 sudo[78985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[78985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[78985]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 10 16:58:01 compute-0 sudo[79033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79033]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 16:58:01 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 16:58:01 compute-0 sudo[79087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config
Jan 10 16:58:01 compute-0 sudo[79087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79087]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config
Jan 10 16:58:01 compute-0 sudo[79135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79135]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[79160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79160]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:01 compute-0 sudo[79185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79185]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[79233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79233]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxljhogwoknmwwabyjmqeuyrhbgygzts ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064281.0961525-36469-151021259633936/async_wrapper.py j905361730444 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064281.0961525-36469-151021259633936/AnsiballZ_command.py _'
Jan 10 16:58:01 compute-0 sudo[79337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:01 compute-0 sudo[79325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[79325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79325]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf.new
Jan 10 16:58:01 compute-0 sudo[79358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79358]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 ansible-async_wrapper.py[79355]: Invoked with j905361730444 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064281.0961525-36469-151021259633936/AnsiballZ_command.py _
Jan 10 16:58:01 compute-0 ansible-async_wrapper.py[79408]: Starting module and watcher
Jan 10 16:58:01 compute-0 ansible-async_wrapper.py[79408]: Start watching 79409 (30)
Jan 10 16:58:01 compute-0 ansible-async_wrapper.py[79409]: Start module (79409)
Jan 10 16:58:01 compute-0 ansible-async_wrapper.py[79355]: Return async_wrapper task started.
Jan 10 16:58:01 compute-0 sudo[79383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf.new /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 16:58:01 compute-0 sudo[79383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79337]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79383]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 16:58:01 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 16:58:01 compute-0 sudo[79413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 10 16:58:01 compute-0 sudo[79413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79413]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph
Jan 10 16:58:01 compute-0 python3[79410]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:01 compute-0 sudo[79438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79438]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 sudo[79464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.client.admin.keyring.new
Jan 10 16:58:01 compute-0 sudo[79464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:01 compute-0 sudo[79464]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:01 compute-0 podman[79463]: 2026-01-10 16:58:01.958877931 +0000 UTC m=+0.056888371 container create 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:01 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:02 compute-0 systemd[1]: Started libpod-conmon-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope.
Jan 10 16:58:02 compute-0 sudo[79500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:02 compute-0 sudo[79500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79500]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:01.941623821 +0000 UTC m=+0.039634281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:02 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:02 compute-0 ceph-mon[75249]: Updating compute-0:/etc/ceph/ceph.conf
Jan 10 16:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:02.060646154 +0000 UTC m=+0.158656594 container init 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:02.068077266 +0000 UTC m=+0.166087696 container start 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:02.07115616 +0000 UTC m=+0.169166690 container attach 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:02 compute-0 sudo[79532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79532]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79581]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79625]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 10 16:58:02 compute-0 sudo[79650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79650]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 16:58:02 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 16:58:02 compute-0 sudo[79675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config
Jan 10 16:58:02 compute-0 sudo[79675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79675]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config
Jan 10 16:58:02 compute-0 sudo[79700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:58:02 compute-0 sudo[79700]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 zen_bartik[79528]: 
Jan 10 16:58:02 compute-0 zen_bartik[79528]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 16:58:02 compute-0 systemd[1]: libpod-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope: Deactivated successfully.
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:02.527870124 +0000 UTC m=+0.625880594 container died 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:58:02 compute-0 sudo[79727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79727]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:02 compute-0 sudo[79762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79762]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79787]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 sudo[79838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79838]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:02 compute-0 sudo[79883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring.new
Jan 10 16:58:02 compute-0 sudo[79883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:02 compute-0 sudo[79883]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125-merged.mount: Deactivated successfully.
Jan 10 16:58:02 compute-0 podman[79463]: 2026-01-10 16:58:02.991896367 +0000 UTC m=+1.089906807 container remove 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 16:58:03 compute-0 systemd[1]: libpod-conmon-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope: Deactivated successfully.
Jan 10 16:58:03 compute-0 ansible-async_wrapper.py[79409]: Module complete (79409)
Jan 10 16:58:03 compute-0 sudo[79909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring.new /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 16:58:03 compute-0 sudo[79909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:03 compute-0 sudo[79909]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 16:58:03 compute-0 ceph-mon[75249]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 16:58:03 compute-0 ceph-mon[75249]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:03 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1))
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 10 16:58:03 compute-0 sudo[79957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpitjbxuyfnqbdutnjliizskwvbgtnlq ; /usr/bin/python3'
Jan 10 16:58:03 compute-0 sudo[79957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:03 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 10 16:58:03 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 10 16:58:03 compute-0 sudo[79960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:03 compute-0 sudo[79960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:03 compute-0 sudo[79960]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:03 compute-0 sudo[79985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:03 compute-0 sudo[79985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:03 compute-0 python3[79959]: ansible-ansible.legacy.async_status Invoked with jid=j905361730444.79355 mode=status _async_dir=/root/.ansible_async
Jan 10 16:58:03 compute-0 sudo[79957]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:03 compute-0 sudo[80056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvacxlmhmncqsgvpivjhocpracmkkbo ; /usr/bin/python3'
Jan 10 16:58:03 compute-0 sudo[80056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:03 compute-0 python3[80058]: ansible-ansible.legacy.async_status Invoked with jid=j905361730444.79355 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 16:58:03 compute-0 sudo[80056]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.622744096 +0000 UTC m=+0.045652885 container create f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:58:03 compute-0 systemd[1]: Started libpod-conmon-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope.
Jan 10 16:58:03 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.696622918 +0000 UTC m=+0.119531747 container init f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.601572619 +0000 UTC m=+0.024481418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.706142298 +0000 UTC m=+0.129051087 container start f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:58:03 compute-0 infallible_bardeen[80116]: 167 167
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.710189018 +0000 UTC m=+0.133097827 container attach f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:03 compute-0 systemd[1]: libpod-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope: Deactivated successfully.
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.711942986 +0000 UTC m=+0.134851775 container died f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe4af7f4251a8e81aee2a2db7546ed8e9551996150cddc555ec7f112f011f5b-merged.mount: Deactivated successfully.
Jan 10 16:58:03 compute-0 podman[80100]: 2026-01-10 16:58:03.753345034 +0000 UTC m=+0.176253813 container remove f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 16:58:03 compute-0 systemd[1]: libpod-conmon-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope: Deactivated successfully.
Jan 10 16:58:03 compute-0 systemd[1]: Reloading.
Jan 10 16:58:03 compute-0 systemd-rc-local-generator[80186]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:03 compute-0 systemd-sysv-generator[80190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:03 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:04 compute-0 sudo[80158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffziwqigyeljtnibizvkbbrndtxgjrw ; /usr/bin/python3'
Jan 10 16:58:04 compute-0 sudo[80158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 10 16:58:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:04 compute-0 ceph-mon[75249]: Deploying daemon crash.compute-0 on compute-0
Jan 10 16:58:04 compute-0 systemd[1]: Reloading.
Jan 10 16:58:04 compute-0 python3[80196]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 16:58:04 compute-0 systemd-rc-local-generator[80226]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:04 compute-0 systemd-sysv-generator[80229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:04 compute-0 systemd[1]: Starting Ceph crash.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:04 compute-0 sudo[80158]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:04 compute-0 podman[80284]: 2026-01-10 16:58:04.61974698 +0000 UTC m=+0.046249432 container create 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:04 compute-0 podman[80284]: 2026-01-10 16:58:04.601147453 +0000 UTC m=+0.027649915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:04 compute-0 podman[80284]: 2026-01-10 16:58:04.698633199 +0000 UTC m=+0.125135671 container init 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:04 compute-0 podman[80284]: 2026-01-10 16:58:04.711567271 +0000 UTC m=+0.138069713 container start 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:58:04 compute-0 bash[80284]: 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d
Jan 10 16:58:04 compute-0 systemd[1]: Started Ceph crash.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 10 16:58:04 compute-0 sudo[79985]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:04 compute-0 sudo[80328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csswmfwlvbmhkbngxglkaoilpqzextze ; /usr/bin/python3'
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:04 compute-0 sudo[80328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1))
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2))
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 16:58:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.876+0000 7fc2d4549640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.876+0000 7fc2d4549640 -1 AuthRegistry(0x7fc2cc053640) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.878+0000 7fc2d4549640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.878+0000 7fc2d4549640 -1 AuthRegistry(0x7fc2d4547fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.881+0000 7fc2d22be640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.881+0000 7fc2d4549640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 10 16:58:04 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 10 16:58:04 compute-0 sudo[80332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:04 compute-0 sudo[80332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:04 compute-0 sudo[80332]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:04 compute-0 python3[80331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:04 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:04 compute-0 sudo[80367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:04 compute-0 sudo[80367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.036214777 +0000 UTC m=+0.073008940 container create 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 16:58:05 compute-0 systemd[1]: Started libpod-conmon-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope.
Jan 10 16:58:05 compute-0 ceph-mon[75249]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 16:58:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.014480135 +0000 UTC m=+0.051274308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.135461741 +0000 UTC m=+0.172255934 container init 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.144804386 +0000 UTC m=+0.181598559 container start 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.147875989 +0000 UTC m=+0.184670162 container attach 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.451755529 +0000 UTC m=+0.030858122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:05 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:58:05 compute-0 focused_shamir[80408]: 
Jan 10 16:58:05 compute-0 focused_shamir[80408]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 16:58:05 compute-0 systemd[1]: libpod-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope: Deactivated successfully.
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.642367233 +0000 UTC m=+0.221469806 container create cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.643665518 +0000 UTC m=+0.680459681 container died 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 16:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6-merged.mount: Deactivated successfully.
Jan 10 16:58:05 compute-0 podman[80374]: 2026-01-10 16:58:05.753765228 +0000 UTC m=+0.790559431 container remove 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 16:58:05 compute-0 systemd[1]: libpod-conmon-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope: Deactivated successfully.
Jan 10 16:58:05 compute-0 sudo[80328]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:05 compute-0 systemd[1]: Started libpod-conmon-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope.
Jan 10 16:58:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.834858107 +0000 UTC m=+0.413960710 container init cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.841393425 +0000 UTC m=+0.420495998 container start cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:05 compute-0 intelligent_goldberg[80498]: 167 167
Jan 10 16:58:05 compute-0 systemd[1]: libpod-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope: Deactivated successfully.
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.85513747 +0000 UTC m=+0.434240213 container attach cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.85735352 +0000 UTC m=+0.436456163 container died cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 16:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-903f7a70873eec468a7c2fb856683b482df1ba16ee0c7af0663c0f236514830d-merged.mount: Deactivated successfully.
Jan 10 16:58:05 compute-0 podman[80470]: 2026-01-10 16:58:05.904203727 +0000 UTC m=+0.483306300 container remove cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:05 compute-0 systemd[1]: libpod-conmon-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope: Deactivated successfully.
Jan 10 16:58:05 compute-0 systemd[1]: Reloading.
Jan 10 16:58:05 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:06 compute-0 systemd-rc-local-generator[80549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:06 compute-0 systemd-sysv-generator[80553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:06 compute-0 ceph-mon[75249]: Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 16:58:06 compute-0 sudo[80574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfghlojzhpnsbwbvefcoaxxmsmecgmqf ; /usr/bin/python3'
Jan 10 16:58:06 compute-0 sudo[80574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:06 compute-0 systemd[1]: Reloading.
Jan 10 16:58:06 compute-0 systemd-rc-local-generator[80608]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:06 compute-0 systemd-sysv-generator[80613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:06 compute-0 python3[80578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:06 compute-0 podman[80617]: 2026-01-10 16:58:06.486377149 +0000 UTC m=+0.068261541 container create 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 16:58:06 compute-0 systemd[1]: Started libpod-conmon-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope.
Jan 10 16:58:06 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:06 compute-0 podman[80617]: 2026-01-10 16:58:06.46253926 +0000 UTC m=+0.044423672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 podman[80617]: 2026-01-10 16:58:06.581767498 +0000 UTC m=+0.163651880 container init 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:06 compute-0 podman[80617]: 2026-01-10 16:58:06.589656503 +0000 UTC m=+0.171540875 container start 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:06 compute-0 podman[80617]: 2026-01-10 16:58:06.593750715 +0000 UTC m=+0.175635087 container attach 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 16:58:06 compute-0 ansible-async_wrapper.py[79408]: Done in kid B.
Jan 10 16:58:06 compute-0 podman[80705]: 2026-01-10 16:58:06.81817762 +0000 UTC m=+0.043491066 container create 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/lib/ceph/mgr/ceph-compute-0.ipbphh supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:06 compute-0 podman[80705]: 2026-01-10 16:58:06.884152777 +0000 UTC m=+0.109466243 container init 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:06 compute-0 podman[80705]: 2026-01-10 16:58:06.89011644 +0000 UTC m=+0.115429886 container start 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 16:58:06 compute-0 bash[80705]: 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e
Jan 10 16:58:06 compute-0 podman[80705]: 2026-01-10 16:58:06.798171444 +0000 UTC m=+0.023484910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:06 compute-0 systemd[1]: Started Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:06 compute-0 ceph-mgr[80724]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:58:06 compute-0 ceph-mgr[80724]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 16:58:06 compute-0 ceph-mgr[80724]: pidfile_write: ignore empty --pid-file
Jan 10 16:58:06 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:06 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'alerts'
Jan 10 16:58:06 compute-0 sudo[80367]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:06 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:07 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:58:07 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2))
Jan 10 16:58:07 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 10 16:58:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:58:07 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 10 16:58:07 compute-0 sudo[80745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:58:07 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4073465749' entity='client.admin' 
Jan 10 16:58:07 compute-0 sudo[80745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:07 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'balancer'
Jan 10 16:58:07 compute-0 sudo[80745]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:07 compute-0 systemd[1]: libpod-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope: Deactivated successfully.
Jan 10 16:58:07 compute-0 podman[80617]: 2026-01-10 16:58:07.115308685 +0000 UTC m=+0.697193067 container died 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:58:07 compute-0 ceph-mon[75249]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:07 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4073465749' entity='client.admin' 
Jan 10 16:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f-merged.mount: Deactivated successfully.
Jan 10 16:58:07 compute-0 podman[80617]: 2026-01-10 16:58:07.16061926 +0000 UTC m=+0.742503652 container remove 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:58:07 compute-0 systemd[1]: libpod-conmon-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope: Deactivated successfully.
Jan 10 16:58:07 compute-0 sudo[80772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:07 compute-0 sudo[80772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:07 compute-0 sudo[80574]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:07 compute-0 sudo[80772]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:07 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'cephadm'
Jan 10 16:58:07 compute-0 sudo[80808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:58:07 compute-0 sudo[80808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:07 compute-0 sudo[80856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvyuyypbqjctwamyzbawuukkwgnvlxyt ; /usr/bin/python3'
Jan 10 16:58:07 compute-0 sudo[80856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:07 compute-0 python3[80858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:07 compute-0 podman[80874]: 2026-01-10 16:58:07.54599065 +0000 UTC m=+0.051241167 container create bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 16:58:07 compute-0 systemd[1]: Started libpod-conmon-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope.
Jan 10 16:58:07 compute-0 podman[80874]: 2026-01-10 16:58:07.523076276 +0000 UTC m=+0.028326793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:07 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:07 compute-0 podman[80874]: 2026-01-10 16:58:07.658213007 +0000 UTC m=+0.163463524 container init bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 16:58:07 compute-0 podman[80874]: 2026-01-10 16:58:07.665736432 +0000 UTC m=+0.170986929 container start bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 16:58:07 compute-0 podman[80874]: 2026-01-10 16:58:07.690016253 +0000 UTC m=+0.195266750 container attach bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:07 compute-0 podman[80919]: 2026-01-10 16:58:07.745905036 +0000 UTC m=+0.127273428 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:07 compute-0 podman[80919]: 2026-01-10 16:58:07.840327159 +0000 UTC m=+0.221695541 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 16:58:07 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:08 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'crash'
Jan 10 16:58:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 10 16:58:08 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'dashboard'
Jan 10 16:58:08 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080358249' entity='client.admin' 
Jan 10 16:58:08 compute-0 systemd[1]: libpod-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope: Deactivated successfully.
Jan 10 16:58:08 compute-0 podman[80874]: 2026-01-10 16:58:08.234845508 +0000 UTC m=+0.740096035 container died bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7-merged.mount: Deactivated successfully.
Jan 10 16:58:08 compute-0 podman[80874]: 2026-01-10 16:58:08.678049914 +0000 UTC m=+1.183300421 container remove bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:08 compute-0 systemd[1]: libpod-conmon-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope: Deactivated successfully.
Jan 10 16:58:08 compute-0 sudo[80856]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:08 compute-0 sudo[80808]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:08 compute-0 sudo[81093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trehxrjhzmoqpazplcclbjiubkqmudpr ; /usr/bin/python3'
Jan 10 16:58:08 compute-0 sudo[81093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:08 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'devicehealth'
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [progress INFO root] Writing back 2 completed events
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:09 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 16:58:09 compute-0 python3[81095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:09 compute-0 podman[81096]: 2026-01-10 16:58:09.173350139 +0000 UTC m=+0.106209905 container create 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:09 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4080358249' entity='client.admin' 
Jan 10 16:58:09 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 16:58:09 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 16:58:09 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]:   from numpy import show_config as show_numpy_config
Jan 10 16:58:09 compute-0 podman[81096]: 2026-01-10 16:58:09.104931955 +0000 UTC m=+0.037791721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:09 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'influx'
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:58:09 compute-0 systemd[1]: Started libpod-conmon-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope.
Jan 10 16:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:09 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'insights'
Jan 10 16:58:09 compute-0 podman[81096]: 2026-01-10 16:58:09.323267254 +0000 UTC m=+0.256126990 container init 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:58:09 compute-0 podman[81096]: 2026-01-10 16:58:09.331168429 +0000 UTC m=+0.264028155 container start 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:09 compute-0 podman[81096]: 2026-01-10 16:58:09.336474974 +0000 UTC m=+0.269334890 container attach 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:09 compute-0 sudo[81114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:58:09 compute-0 sudo[81114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:09 compute-0 sudo[81114]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:09 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 16:58:09 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:09 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 16:58:09 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 16:58:09 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'iostat'
Jan 10 16:58:09 compute-0 sudo[81142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:09 compute-0 sudo[81142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:09 compute-0 sudo[81142]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:09 compute-0 sudo[81167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:09 compute-0 sudo[81167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:09 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'k8sevents'
Jan 10 16:58:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 10 16:58:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 10 16:58:09 compute-0 podman[81227]: 2026-01-10 16:58:09.865661712 +0000 UTC m=+0.068070395 container create 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 16:58:09 compute-0 systemd[1]: Started libpod-conmon-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope.
Jan 10 16:58:09 compute-0 podman[81227]: 2026-01-10 16:58:09.838313877 +0000 UTC m=+0.040722620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:09 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:09 compute-0 podman[81227]: 2026-01-10 16:58:09.982744502 +0000 UTC m=+0.185153255 container init 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:09 compute-0 podman[81227]: 2026-01-10 16:58:09.994003509 +0000 UTC m=+0.196412212 container start 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:09 compute-0 podman[81227]: 2026-01-10 16:58:09.998338847 +0000 UTC m=+0.200747520 container attach 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:09 compute-0 practical_elion[81244]: 167 167
Jan 10 16:58:10 compute-0 systemd[1]: libpod-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81227]: 2026-01-10 16:58:10.007052235 +0000 UTC m=+0.209460908 container died 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:10 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'localpool'
Jan 10 16:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-982cab76a062523e164fed239fb13a5a1bdcd63c4b8ae5c6712ffa1d553d5641-merged.mount: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81227]: 2026-01-10 16:58:10.052659897 +0000 UTC m=+0.255068560 container remove 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:58:10 compute-0 systemd[1]: libpod-conmon-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 16:58:10 compute-0 sudo[81167]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 16:58:10 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 16:58:10 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 16:58:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:10 compute-0 sudo[81262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:10 compute-0 sudo[81262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:10 compute-0 sudo[81262]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:10 compute-0 sudo[81287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:10 compute-0 sudo[81287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 10 16:58:10 compute-0 happy_heyrovsky[81111]: set require_min_compat_client to mimic
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 10 16:58:10 compute-0 systemd[1]: libpod-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81096]: 2026-01-10 16:58:10.35040081 +0000 UTC m=+1.283260576 container died 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:58:10 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'mirroring'
Jan 10 16:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b-merged.mount: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81096]: 2026-01-10 16:58:10.40766305 +0000 UTC m=+1.340522776 container remove 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 16:58:10 compute-0 systemd[1]: libpod-conmon-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 sudo[81093]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:10 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'nfs'
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.642636862 +0000 UTC m=+0.044139474 container create ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 16:58:10 compute-0 systemd[1]: Started libpod-conmon-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope.
Jan 10 16:58:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.622349369 +0000 UTC m=+0.023851981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.726110336 +0000 UTC m=+0.127612978 container init ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 16:58:10 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'orchestrator'
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.735471772 +0000 UTC m=+0.136974414 container start ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.73981525 +0000 UTC m=+0.141317942 container attach ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:10 compute-0 mystifying_hodgkin[81356]: 167 167
Jan 10 16:58:10 compute-0 systemd[1]: libpod-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.741745942 +0000 UTC m=+0.143248554 container died ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8a0218286aeed4dc01e6d0cbd86b96c4f9eb945cdae28544bfad470a752502-merged.mount: Deactivated successfully.
Jan 10 16:58:10 compute-0 podman[81341]: 2026-01-10 16:58:10.783970373 +0000 UTC m=+0.185472995 container remove ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:10 compute-0 systemd[1]: libpod-conmon-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope: Deactivated successfully.
Jan 10 16:58:10 compute-0 sudo[81287]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:10 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:10 compute-0 sudo[81373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:10 compute-0 sudo[81373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:10 compute-0 sudo[81373]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:10 compute-0 sudo[81421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbhlzrsnsorofyjdfeeelkvicpwesiq ; /usr/bin/python3'
Jan 10 16:58:10 compute-0 sudo[81421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 16:58:11 compute-0 sudo[81422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:58:11 compute-0 sudo[81422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'osd_support'
Jan 10 16:58:11 compute-0 python3[81427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 16:58:11 compute-0 podman[81449]: 2026-01-10 16:58:11.202170497 +0000 UTC m=+0.049956652 container create 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:11 compute-0 systemd[1]: Started libpod-conmon-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope.
Jan 10 16:58:11 compute-0 podman[81449]: 2026-01-10 16:58:11.17362222 +0000 UTC m=+0.021408375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'progress'
Jan 10 16:58:11 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:11 compute-0 ceph-mon[75249]: Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 16:58:11 compute-0 ceph-mon[75249]: Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 16:58:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 10 16:58:11 compute-0 ceph-mon[75249]: osdmap e3: 0 total, 0 up, 0 in
Jan 10 16:58:11 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:11 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:11 compute-0 podman[81449]: 2026-01-10 16:58:11.333417133 +0000 UTC m=+0.181203298 container init 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 16:58:11 compute-0 podman[81449]: 2026-01-10 16:58:11.342008867 +0000 UTC m=+0.189795012 container start 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 16:58:11 compute-0 podman[81449]: 2026-01-10 16:58:11.362743342 +0000 UTC m=+0.210529507 container attach 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'prometheus'
Jan 10 16:58:11 compute-0 podman[81527]: 2026-01-10 16:58:11.647881731 +0000 UTC m=+0.065906067 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 16:58:11 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:58:11 compute-0 sudo[81549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:11 compute-0 sudo[81549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:11 compute-0 sudo[81549]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:11 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'rbd_support'
Jan 10 16:58:11 compute-0 sudo[81574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 10 16:58:11 compute-0 sudo[81574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:11 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:11 compute-0 podman[81527]: 2026-01-10 16:58:11.973310188 +0000 UTC m=+0.391334514 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:12 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'rgw'
Jan 10 16:58:12 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'rook'
Jan 10 16:58:12 compute-0 ceph-mon[75249]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:58:12 compute-0 ceph-mon[75249]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:12 compute-0 sudo[81574]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO root] Added host compute-0
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 stupefied_visvesvaraya[81463]: Added host 'compute-0' with addr '192.168.122.100'
Jan 10 16:58:12 compute-0 stupefied_visvesvaraya[81463]: Scheduled mon update...
Jan 10 16:58:12 compute-0 stupefied_visvesvaraya[81463]: Scheduled mgr update...
Jan 10 16:58:12 compute-0 stupefied_visvesvaraya[81463]: Scheduled osd.default_drive_group update...
Jan 10 16:58:12 compute-0 systemd[1]: libpod-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope: Deactivated successfully.
Jan 10 16:58:12 compute-0 podman[81449]: 2026-01-10 16:58:12.529492322 +0000 UTC m=+1.377278477 container died 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:58:12 compute-0 sudo[81422]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3-merged.mount: Deactivated successfully.
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:12 compute-0 podman[81449]: 2026-01-10 16:58:12.594882004 +0000 UTC m=+1.442668149 container remove 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:58:12 compute-0 systemd[1]: libpod-conmon-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope: Deactivated successfully.
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 16:58:12 compute-0 sudo[81421]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1))
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 16:58:12 compute-0 sudo[81720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:12 compute-0 sudo[81720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:12 compute-0 sudo[81720]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:12 compute-0 sudo[81745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --name mgr.compute-0.ipbphh --force --tcp-ports 8765
Jan 10 16:58:12 compute-0 sudo[81745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:12 compute-0 sudo[81793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujntlegzbtrovzlbnfyehbwhaulwzsp ; /usr/bin/python3'
Jan 10 16:58:12 compute-0 sudo[81793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:12 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:12 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'selftest'
Jan 10 16:58:13 compute-0 ceph-mgr[80724]: mgr[py] Loading python module 'smb'
Jan 10 16:58:13 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:13 compute-0 python3[81795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.187169072 +0000 UTC m=+0.074524902 container create 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:58:13 compute-0 systemd[1]: Started libpod-conmon-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope.
Jan 10 16:58:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.152180579 +0000 UTC m=+0.039536429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.258409743 +0000 UTC m=+0.145765623 container init 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.265683911 +0000 UTC m=+0.153039741 container start 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.269206297 +0000 UTC m=+0.156562157 container attach 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:13 compute-0 podman[81854]: 2026-01-10 16:58:13.313072162 +0000 UTC m=+0.095839112 container died 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 16:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106-merged.mount: Deactivated successfully.
Jan 10 16:58:13 compute-0 podman[81854]: 2026-01-10 16:58:13.365614054 +0000 UTC m=+0.148381004 container remove 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 16:58:13 compute-0 bash[81854]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh
Jan 10 16:58:13 compute-0 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Main process exited, code=exited, status=143/n/a
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Added host compute-0
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Saving service mon spec with placement compute-0
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Saving service mgr spec with placement compute-0
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Saving service osd.default_drive_group spec with placement compute-0
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 16:58:13 compute-0 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Failed with result 'exit-code'.
Jan 10 16:58:13 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:13 compute-0 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Consumed 7.238s CPU time, 382.6M memory peak, read 0B from disk, written 560.0K to disk.
Jan 10 16:58:13 compute-0 systemd[1]: Reloading.
Jan 10 16:58:13 compute-0 systemd-rc-local-generator[81964]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:13 compute-0 systemd-sysv-generator[81969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285699077' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:58:13 compute-0 wizardly_proskuriakova[81860]: 
Jan 10 16:58:13 compute-0 wizardly_proskuriakova[81860]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":55,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-10T16:57:15:771836+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-10T16:57:15.774565+0000","services":{}},"progress_events":{}}
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.77213392 +0000 UTC m=+0.659489760 container died 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:13 compute-0 systemd[1]: libpod-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope: Deactivated successfully.
Jan 10 16:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077-merged.mount: Deactivated successfully.
Jan 10 16:58:13 compute-0 podman[81821]: 2026-01-10 16:58:13.870420758 +0000 UTC m=+0.757776638 container remove 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:13 compute-0 systemd[1]: libpod-conmon-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope: Deactivated successfully.
Jan 10 16:58:13 compute-0 sudo[81793]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:13 compute-0 sudo[81745]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:13 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ipbphh
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} : dispatch
Jan 10 16:58:13 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ipbphh
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"}]': finished
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:58:13 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1))
Jan 10 16:58:13 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:14 compute-0 sudo[81991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:14 compute-0 sudo[81991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:14 compute-0 sudo[81991]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:14 compute-0 sudo[82016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:58:14 compute-0 sudo[82016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:14 compute-0 ceph-mgr[75538]: [progress INFO root] Writing back 3 completed events
Jan 10 16:58:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 16:58:14 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/285699077' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:58:14 compute-0 ceph-mon[75249]: Removing key for mgr.compute-0.ipbphh
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} : dispatch
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"}]': finished
Jan 10 16:58:14 compute-0 ceph-mon[75249]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:14 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.549891081 +0000 UTC m=+0.060043447 container create ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:14 compute-0 systemd[1]: Started libpod-conmon-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope.
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.529031862 +0000 UTC m=+0.039184238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.649891425 +0000 UTC m=+0.160043831 container init ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.672862431 +0000 UTC m=+0.183014787 container start ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.677839456 +0000 UTC m=+0.187991852 container attach ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:14 compute-0 brave_chatterjee[82070]: 167 167
Jan 10 16:58:14 compute-0 systemd[1]: libpod-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope: Deactivated successfully.
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.681116646 +0000 UTC m=+0.191269022 container died ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e03338b6bef4f7aa780abfcda19b90c43d1ea27a77f84d639d4521ac2ea22b-merged.mount: Deactivated successfully.
Jan 10 16:58:14 compute-0 podman[82053]: 2026-01-10 16:58:14.739028094 +0000 UTC m=+0.249180460 container remove ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:14 compute-0 systemd[1]: libpod-conmon-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope: Deactivated successfully.
Jan 10 16:58:14 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:14 compute-0 podman[82095]: 2026-01-10 16:58:14.966181402 +0000 UTC m=+0.059756449 container create b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 16:58:15 compute-0 systemd[1]: Started libpod-conmon-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope.
Jan 10 16:58:15 compute-0 podman[82095]: 2026-01-10 16:58:14.939642769 +0000 UTC m=+0.033217866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:15 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:15 compute-0 podman[82095]: 2026-01-10 16:58:15.062914657 +0000 UTC m=+0.156489704 container init b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 16:58:15 compute-0 podman[82095]: 2026-01-10 16:58:15.077506495 +0000 UTC m=+0.171081542 container start b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:15 compute-0 podman[82095]: 2026-01-10 16:58:15.08063766 +0000 UTC m=+0.174212717 container attach b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 16:58:15 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:15 compute-0 silly_hertz[82112]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:58:15 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9aa1dcc9-88f4-49c0-be40-744313964d3e
Jan 10 16:58:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} v 0)
Jan 10 16:58:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} : dispatch
Jan 10 16:58:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 10 16:58:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"}]': finished
Jan 10 16:58:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 10 16:58:16 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 10 16:58:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:16 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 10 16:58:16 compute-0 lvm[82204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:16 compute-0 lvm[82204]: VG ceph_vg0 finished
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:16 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 10 16:58:16 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:16 compute-0 ceph-mon[75249]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} : dispatch
Jan 10 16:58:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"}]': finished
Jan 10 16:58:16 compute-0 ceph-mon[75249]: osdmap e4: 1 total, 0 up, 1 in
Jan 10 16:58:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 16:58:17 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890078762' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:17 compute-0 silly_hertz[82112]:  stderr: got monmap epoch 1
Jan 10 16:58:17 compute-0 silly_hertz[82112]: --> Creating keyring file for osd.0
Jan 10 16:58:17 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 10 16:58:17 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 10 16:58:17 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 9aa1dcc9-88f4-49c0-be40-744313964d3e --setuser ceph --setgroup ceph
Jan 10 16:58:17 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:17 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 10 16:58:17 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 10 16:58:17 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/890078762' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:18 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:17.499+0000 7f2c8dc7b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 10 16:58:18 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:17.525+0000 7f2c8dc7b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 10 16:58:18 compute-0 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:18 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e8e31518-65ae-476c-891c-e2fc550d0a1c
Jan 10 16:58:18 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:18 compute-0 ceph-mon[75249]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:18 compute-0 ceph-mon[75249]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 10 16:58:18 compute-0 ceph-mon[75249]: Cluster is now healthy
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} v 0)
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} : dispatch
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"}]': finished
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:19 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:19 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:19 compute-0 lvm[83155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:19 compute-0 lvm[83155]: VG ceph_vg1 finished
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 10 16:58:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 16:58:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566005166' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:19 compute-0 silly_hertz[82112]:  stderr: got monmap epoch 1
Jan 10 16:58:19 compute-0 silly_hertz[82112]: --> Creating keyring file for osd.1
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 10 16:58:19 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e8e31518-65ae-476c-891c-e2fc550d0a1c --setuser ceph --setgroup ceph
Jan 10 16:58:19 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} : dispatch
Jan 10 16:58:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"}]': finished
Jan 10 16:58:20 compute-0 ceph-mon[75249]: osdmap e5: 2 total, 0 up, 2 in
Jan 10 16:58:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1566005166' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:20 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:19.916+0000 7f5caeeea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 10 16:58:20 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:19.940+0000 7f5caeeea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 10 16:58:20 compute-0 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 10 16:58:20 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:20 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 16:58:20 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:21 compute-0 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 16:58:21 compute-0 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 87473727-6468-4f68-8371-e0bf60edaa43
Jan 10 16:58:21 compute-0 ceph-mon[75249]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} v 0)
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} : dispatch
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"}]': finished
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:21 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:21 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:21 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:21 compute-0 lvm[84101]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:21 compute-0 lvm[84101]: VG ceph_vg2 finished
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:21 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 10 16:58:21 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:22 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} : dispatch
Jan 10 16:58:22 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"}]': finished
Jan 10 16:58:22 compute-0 ceph-mon[75249]: osdmap e6: 3 total, 0 up, 3 in
Jan 10 16:58:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 16:58:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745133082' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:22 compute-0 silly_hertz[82112]:  stderr: got monmap epoch 1
Jan 10 16:58:22 compute-0 silly_hertz[82112]: --> Creating keyring file for osd.2
Jan 10 16:58:22 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 10 16:58:22 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 10 16:58:22 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 87473727-6468-4f68-8371-e0bf60edaa43 --setuser ceph --setgroup ceph
Jan 10 16:58:22 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:23 compute-0 ceph-mon[75249]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:23 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3745133082' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 16:58:23 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:22.439+0000 7fb3b6dec8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 10 16:58:23 compute-0 silly_hertz[82112]:  stderr: 2026-01-10T16:58:22.455+0000 7fb3b6dec8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 10 16:58:23 compute-0 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 16:58:23 compute-0 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:23 compute-0 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 16:58:23 compute-0 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 10 16:58:23 compute-0 systemd[1]: libpod-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Deactivated successfully.
Jan 10 16:58:23 compute-0 systemd[1]: libpod-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Consumed 6.799s CPU time.
Jan 10 16:58:23 compute-0 podman[82095]: 2026-01-10 16:58:23.447297481 +0000 UTC m=+8.540872528 container died b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862-merged.mount: Deactivated successfully.
Jan 10 16:58:23 compute-0 podman[82095]: 2026-01-10 16:58:23.5446869 +0000 UTC m=+8.638261937 container remove b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:58:23 compute-0 systemd[1]: libpod-conmon-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Deactivated successfully.
Jan 10 16:58:23 compute-0 sudo[82016]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:23 compute-0 sudo[85038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:23 compute-0 sudo[85038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:23 compute-0 sudo[85038]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:23 compute-0 sudo[85063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:58:23 compute-0 sudo[85063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:23 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.02086021 +0000 UTC m=+0.043201805 container create e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:24 compute-0 systemd[1]: Started libpod-conmon-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope.
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.001876502 +0000 UTC m=+0.024218117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.112076044 +0000 UTC m=+0.134417719 container init e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.118254369 +0000 UTC m=+0.140595964 container start e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.121578174 +0000 UTC m=+0.143919849 container attach e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 16:58:24 compute-0 eloquent_wilbur[85116]: 167 167
Jan 10 16:58:24 compute-0 systemd[1]: libpod-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope: Deactivated successfully.
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.123318933 +0000 UTC m=+0.145660528 container died e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 16:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb32be3cf9188cb1c63c4a3fc06f9688b1b4de92a6dd022f96a7a935dd1c1608-merged.mount: Deactivated successfully.
Jan 10 16:58:24 compute-0 podman[85099]: 2026-01-10 16:58:24.163242824 +0000 UTC m=+0.185584459 container remove e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:24 compute-0 systemd[1]: libpod-conmon-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope: Deactivated successfully.
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.361734247 +0000 UTC m=+0.055125002 container create a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:58:24 compute-0 systemd[1]: Started libpod-conmon-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope.
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.334160256 +0000 UTC m=+0.027551071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.457653555 +0000 UTC m=+0.151044300 container init a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.465481087 +0000 UTC m=+0.158871792 container start a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.469629514 +0000 UTC m=+0.163020229 container attach a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:24 compute-0 cool_jang[85158]: {
Jan 10 16:58:24 compute-0 cool_jang[85158]:     "0": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:         {
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "devices": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "/dev/loop3"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             ],
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_name": "ceph_lv0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_size": "21470642176",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "name": "ceph_lv0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "tags": {
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.crush_device_class": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.encrypted": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_id": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.vdo": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.with_tpm": "0"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             },
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "vg_name": "ceph_vg0"
Jan 10 16:58:24 compute-0 cool_jang[85158]:         }
Jan 10 16:58:24 compute-0 cool_jang[85158]:     ],
Jan 10 16:58:24 compute-0 cool_jang[85158]:     "1": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:         {
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "devices": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "/dev/loop4"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             ],
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_name": "ceph_lv1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_size": "21470642176",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "name": "ceph_lv1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "tags": {
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.crush_device_class": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.encrypted": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_id": "1",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.vdo": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.with_tpm": "0"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             },
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "vg_name": "ceph_vg1"
Jan 10 16:58:24 compute-0 cool_jang[85158]:         }
Jan 10 16:58:24 compute-0 cool_jang[85158]:     ],
Jan 10 16:58:24 compute-0 cool_jang[85158]:     "2": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:         {
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "devices": [
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "/dev/loop5"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             ],
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_name": "ceph_lv2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_size": "21470642176",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "name": "ceph_lv2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "tags": {
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.crush_device_class": "",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.encrypted": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osd_id": "2",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.vdo": "0",
Jan 10 16:58:24 compute-0 cool_jang[85158]:                 "ceph.with_tpm": "0"
Jan 10 16:58:24 compute-0 cool_jang[85158]:             },
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "type": "block",
Jan 10 16:58:24 compute-0 cool_jang[85158]:             "vg_name": "ceph_vg2"
Jan 10 16:58:24 compute-0 cool_jang[85158]:         }
Jan 10 16:58:24 compute-0 cool_jang[85158]:     ]
Jan 10 16:58:24 compute-0 cool_jang[85158]: }
Jan 10 16:58:24 compute-0 systemd[1]: libpod-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope: Deactivated successfully.
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.774377838 +0000 UTC m=+0.467768563 container died a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c-merged.mount: Deactivated successfully.
Jan 10 16:58:24 compute-0 podman[85141]: 2026-01-10 16:58:24.812306922 +0000 UTC m=+0.505697637 container remove a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:58:24 compute-0 systemd[1]: libpod-conmon-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope: Deactivated successfully.
Jan 10 16:58:24 compute-0 sudo[85063]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 10 16:58:24 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 10 16:58:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:24 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:24 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 10 16:58:24 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 10 16:58:24 compute-0 sudo[85178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:24 compute-0 sudo[85178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:24 compute-0 sudo[85178]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:24 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:24 compute-0 sudo[85203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:24 compute-0 sudo[85203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:25 compute-0 ceph-mon[75249]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 10 16:58:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.346231519 +0000 UTC m=+0.037114973 container create efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:25 compute-0 systemd[1]: Started libpod-conmon-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope.
Jan 10 16:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.416347885 +0000 UTC m=+0.107231369 container init efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.421773439 +0000 UTC m=+0.112656893 container start efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.425235487 +0000 UTC m=+0.116118971 container attach efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:25 compute-0 stupefied_wilson[85283]: 167 167
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.42675738 +0000 UTC m=+0.117640904 container died efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:25 compute-0 systemd[1]: libpod-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope: Deactivated successfully.
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.330442711 +0000 UTC m=+0.021326185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc3261ae95df1658f703649134ed7f7a85969e388b2cda06ffb74aa938f1379c-merged.mount: Deactivated successfully.
Jan 10 16:58:25 compute-0 podman[85267]: 2026-01-10 16:58:25.470824489 +0000 UTC m=+0.161707933 container remove efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 16:58:25 compute-0 systemd[1]: libpod-conmon-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope: Deactivated successfully.
Jan 10 16:58:25 compute-0 podman[85314]: 2026-01-10 16:58:25.712010561 +0000 UTC m=+0.058162819 container create 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:58:25 compute-0 systemd[1]: Started libpod-conmon-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope.
Jan 10 16:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:25 compute-0 podman[85314]: 2026-01-10 16:58:25.69044657 +0000 UTC m=+0.036598828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:25 compute-0 podman[85314]: 2026-01-10 16:58:25.815652667 +0000 UTC m=+0.161804925 container init 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:25 compute-0 podman[85314]: 2026-01-10 16:58:25.823803858 +0000 UTC m=+0.169956086 container start 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 16:58:25 compute-0 podman[85314]: 2026-01-10 16:58:25.827192904 +0000 UTC m=+0.173345162 container attach 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:25 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:26 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 16:58:26 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]:                             [--no-systemd] [--no-tmpfs]
Jan 10 16:58:26 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 16:58:26 compute-0 systemd[1]: libpod-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope: Deactivated successfully.
Jan 10 16:58:26 compute-0 podman[85314]: 2026-01-10 16:58:26.034819976 +0000 UTC m=+0.380972204 container died 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18-merged.mount: Deactivated successfully.
Jan 10 16:58:26 compute-0 ceph-mon[75249]: Deploying daemon osd.0 on compute-0
Jan 10 16:58:26 compute-0 podman[85314]: 2026-01-10 16:58:26.277613854 +0000 UTC m=+0.623766092 container remove 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:58:26 compute-0 systemd[1]: libpod-conmon-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope: Deactivated successfully.
Jan 10 16:58:26 compute-0 systemd[1]: Reloading.
Jan 10 16:58:26 compute-0 systemd-rc-local-generator[85387]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:26 compute-0 systemd-sysv-generator[85391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:26 compute-0 systemd[1]: Reloading.
Jan 10 16:58:26 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:26 compute-0 systemd-rc-local-generator[85427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:26 compute-0 systemd-sysv-generator[85433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:27 compute-0 systemd[1]: Starting Ceph osd.0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:27 compute-0 ceph-mon[75249]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:27 compute-0 podman[85489]: 2026-01-10 16:58:27.406928019 +0000 UTC m=+0.041532578 container create f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 16:58:27 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:27 compute-0 podman[85489]: 2026-01-10 16:58:27.385964015 +0000 UTC m=+0.020568564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:27 compute-0 podman[85489]: 2026-01-10 16:58:27.49273791 +0000 UTC m=+0.127342479 container init f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:27 compute-0 podman[85489]: 2026-01-10 16:58:27.500787508 +0000 UTC m=+0.135392027 container start f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:58:27 compute-0 podman[85489]: 2026-01-10 16:58:27.504270587 +0000 UTC m=+0.138875126 container attach f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:27 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:27 compute-0 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:27 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:27 compute-0 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:27 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:28 compute-0 ceph-mon[75249]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:28 compute-0 lvm[85590]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:28 compute-0 lvm[85590]: VG ceph_vg1 finished
Jan 10 16:58:28 compute-0 lvm[85589]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:28 compute-0 lvm[85589]: VG ceph_vg0 finished
Jan 10 16:58:28 compute-0 lvm[85592]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:28 compute-0 lvm[85592]: VG ceph_vg2 finished
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:28 compute-0 bash[85489]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:28 compute-0 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 16:58:28 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 16:58:28 compute-0 bash[85489]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 16:58:28 compute-0 systemd[1]: libpod-f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc.scope: Deactivated successfully.
Jan 10 16:58:28 compute-0 systemd[1]: libpod-f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc.scope: Consumed 1.709s CPU time.
Jan 10 16:58:28 compute-0 podman[85687]: 2026-01-10 16:58:28.779324047 +0000 UTC m=+0.047641348 container died f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b-merged.mount: Deactivated successfully.
Jan 10 16:58:28 compute-0 podman[85687]: 2026-01-10 16:58:28.837440455 +0000 UTC m=+0.105757686 container remove f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:28 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:29 compute-0 podman[85745]: 2026-01-10 16:58:29.076986355 +0000 UTC m=+0.056744640 container create 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:29 compute-0 podman[85745]: 2026-01-10 16:58:29.048368298 +0000 UTC m=+0.028126623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:29 compute-0 podman[85745]: 2026-01-10 16:58:29.154221355 +0000 UTC m=+0.133979690 container init 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 16:58:29 compute-0 podman[85745]: 2026-01-10 16:58:29.159600231 +0000 UTC m=+0.139358516 container start 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 16:58:29 compute-0 bash[85745]: 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd
Jan 10 16:58:29 compute-0 systemd[1]: Started Ceph osd.0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:29 compute-0 ceph-osd[85764]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: pidfile_write: ignore empty --pid-file
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 sudo[85203]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 10 16:58:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 10 16:58:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:29 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:29 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 10 16:58:29 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 sudo[85782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:29 compute-0 sudo[85782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 sudo[85782]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:29 compute-0 ceph-osd[85764]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 10 16:58:29 compute-0 ceph-osd[85764]: load: jerasure load: lrc 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 sudo[85813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:29 compute-0 sudo[85813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount shared_bdev_used = 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Git sha 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DB SUMMARY
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DB Session ID:  0RH7XH576014Q9A9FBIT
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                     Options.env: 0x560f2dc15ea0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                Options.info_log: 0x560f2ec668a0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.write_buffer_manager: 0x560f2dc7ab40
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Compression algorithms supported:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 43bbcf8f-3aee-403e-8276-7bdc1e6e65cd
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309631931, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309634066, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: freelist init
Jan 10 16:58:29 compute-0 ceph-osd[85764]: freelist _read_cfg
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs umount
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluefs mount shared_bdev_used = 27262976
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Git sha 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DB SUMMARY
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DB Session ID:  0RH7XH576014Q9A9FBIS
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                     Options.env: 0x560f2ee36a80
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                Options.info_log: 0x560f2ec66960
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.write_buffer_manager: 0x560f2dc7ab40
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Compression algorithms supported:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc198d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f2dc19a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 43bbcf8f-3aee-403e-8276-7bdc1e6e65cd
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309690347, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309696097, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309699305, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309703096, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309704577, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560f2ee80000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: DB pointer 0x560f2ee20000
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 10 16:58:29 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 16:58:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 16:58:29 compute-0 ceph-osd[85764]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 16:58:29 compute-0 ceph-osd[85764]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 16:58:29 compute-0 ceph-osd[85764]: _get_class not permitted to load lua
Jan 10 16:58:29 compute-0 ceph-osd[85764]: _get_class not permitted to load sdk
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 load_pgs
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 load_pgs opened 0 pgs
Jan 10 16:58:29 compute-0 ceph-osd[85764]: osd.0 0 log_to_monitors true
Jan 10 16:58:29 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:29.746+0000 7f051afd28c0 -1 osd.0 0 log_to_monitors true
Jan 10 16:58:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 10 16:58:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.863499805 +0000 UTC m=+0.034864849 container create df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:58:29 compute-0 systemd[1]: Started libpod-conmon-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope.
Jan 10 16:58:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.849098979 +0000 UTC m=+0.020464043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.948430128 +0000 UTC m=+0.119795292 container init df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.956520132 +0000 UTC m=+0.127885176 container start df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.960149987 +0000 UTC m=+0.131515041 container attach df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:58:29 compute-0 interesting_rhodes[86318]: 167 167
Jan 10 16:58:29 compute-0 systemd[1]: libpod-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope: Deactivated successfully.
Jan 10 16:58:29 compute-0 podman[86301]: 2026-01-10 16:58:29.966677795 +0000 UTC m=+0.138042849 container died df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:29 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6899152b90c70e7977b647c3222fa75825ff72fbb26e44a029b85b5124de67a-merged.mount: Deactivated successfully.
Jan 10 16:58:30 compute-0 podman[86301]: 2026-01-10 16:58:30.017565885 +0000 UTC m=+0.188930929 container remove df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:30 compute-0 systemd[1]: libpod-conmon-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope: Deactivated successfully.
Jan 10 16:58:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: Deploying daemon osd.1 on compute-0
Jan 10 16:58:30 compute-0 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:30 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:30 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:30 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.336171029 +0000 UTC m=+0.062989571 container create 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:30 compute-0 systemd[1]: Started libpod-conmon-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope.
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.313513944 +0000 UTC m=+0.040332486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.44282936 +0000 UTC m=+0.169647902 container init 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.458817762 +0000 UTC m=+0.185636284 container start 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.463467826 +0000 UTC m=+0.190286448 container attach 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 16:58:30 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 16:58:30 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]:                             [--no-systemd] [--no-tmpfs]
Jan 10 16:58:30 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 16:58:30 compute-0 systemd[1]: libpod-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope: Deactivated successfully.
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.708378061 +0000 UTC m=+0.435196593 container died 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 16:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49-merged.mount: Deactivated successfully.
Jan 10 16:58:30 compute-0 podman[86347]: 2026-01-10 16:58:30.766312014 +0000 UTC m=+0.493130516 container remove 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:30 compute-0 systemd[1]: libpod-conmon-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope: Deactivated successfully.
Jan 10 16:58:30 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 16:58:30 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 16:58:30 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:31 compute-0 systemd[1]: Reloading.
Jan 10 16:58:31 compute-0 systemd-rc-local-generator[86430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:31 compute-0 systemd-sysv-generator[86434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 done with init, starting boot process
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 start_boot
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 16:58:31 compute-0 ceph-osd[85764]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:31 compute-0 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 10 16:58:31 compute-0 ceph-mon[75249]: osdmap e7: 3 total, 0 up, 3 in
Jan 10 16:58:31 compute-0 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:31 compute-0 systemd[1]: Reloading.
Jan 10 16:58:31 compute-0 systemd-rc-local-generator[86465]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:31 compute-0 systemd-sysv-generator[86468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:31 compute-0 systemd[1]: Starting Ceph osd.1 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:31 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:32 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:32 compute-0 podman[86522]: 2026-01-10 16:58:32.251528428 +0000 UTC m=+0.030909983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:32 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:32 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:32 compute-0 podman[86522]: 2026-01-10 16:58:32.746014123 +0000 UTC m=+0.525395568 container create eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:32 compute-0 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:32 compute-0 ceph-mon[75249]: osdmap e8: 3 total, 0 up, 3 in
Jan 10 16:58:32 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:32 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:32 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:32 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:32 compute-0 ceph-mon[75249]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:32 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:32 compute-0 podman[86522]: 2026-01-10 16:58:32.849850362 +0000 UTC m=+0.629231837 container init eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:58:32 compute-0 podman[86522]: 2026-01-10 16:58:32.858921684 +0000 UTC m=+0.638303129 container start eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 16:58:32 compute-0 podman[86522]: 2026-01-10 16:58:32.877728617 +0000 UTC m=+0.657110062 container attach eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:58:32 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:33 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:33 compute-0 lvm[86622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:33 compute-0 lvm[86622]: VG ceph_vg0 finished
Jan 10 16:58:33 compute-0 lvm[86623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:33 compute-0 lvm[86623]: VG ceph_vg1 finished
Jan 10 16:58:33 compute-0 lvm[86625]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:33 compute-0 lvm[86625]: VG ceph_vg2 finished
Jan 10 16:58:33 compute-0 ceph-mon[75249]: purged_snaps scrub starts
Jan 10 16:58:33 compute-0 ceph-mon[75249]: purged_snaps scrub ok
Jan 10 16:58:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 bash[86522]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:33 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 16:58:33 compute-0 bash[86522]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 16:58:33 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:34 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 bash[86522]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 bash[86522]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 16:58:34 compute-0 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 16:58:34 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:34 compute-0 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 16:58:34 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 16:58:34 compute-0 bash[86522]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 16:58:34 compute-0 systemd[1]: libpod-eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304.scope: Deactivated successfully.
Jan 10 16:58:34 compute-0 podman[86522]: 2026-01-10 16:58:34.108664365 +0000 UTC m=+1.888045810 container died eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 16:58:34 compute-0 systemd[1]: libpod-eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304.scope: Consumed 1.860s CPU time.
Jan 10 16:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743-merged.mount: Deactivated successfully.
Jan 10 16:58:34 compute-0 podman[86522]: 2026-01-10 16:58:34.224962895 +0000 UTC m=+2.004344330 container remove eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 16:58:34 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:34 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:34 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:34 compute-0 podman[86790]: 2026-01-10 16:58:34.476429429 +0000 UTC m=+0.027745432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:34 compute-0 podman[86790]: 2026-01-10 16:58:34.573330868 +0000 UTC m=+0.124646841 container create 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:34 compute-0 podman[86790]: 2026-01-10 16:58:34.683414659 +0000 UTC m=+0.234730662 container init 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 16:58:34 compute-0 podman[86790]: 2026-01-10 16:58:34.69316237 +0000 UTC m=+0.244478343 container start 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:34 compute-0 bash[86790]: 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d
Jan 10 16:58:34 compute-0 systemd[1]: Started Ceph osd.1 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:34 compute-0 ceph-osd[86809]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:58:34 compute-0 ceph-osd[86809]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 16:58:34 compute-0 ceph-osd[86809]: pidfile_write: ignore empty --pid-file
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 sudo[85813]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:34 compute-0 ceph-mon[75249]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:34 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:34 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 10 16:58:34 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 10 16:58:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:34 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:34 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 10 16:58:34 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 sudo[86827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:34 compute-0 sudo[86827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:34 compute-0 sudo[86827]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 ceph-osd[86809]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 10 16:58:34 compute-0 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 16:58:34 compute-0 ceph-osd[86809]: load: jerasure load: lrc 
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:34 compute-0 sudo[86858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:58:34 compute-0 sudo[86858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:34 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:35 compute-0 ceph-osd[86809]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount shared_bdev_used = 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Git sha 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DB SUMMARY
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DB Session ID:  THG7H36IMJBHPS2MRPBN
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                     Options.env: 0x55d5960cbc00
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                Options.info_log: 0x55d5962ce900
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.write_buffer_manager: 0x55d596172b40
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Compression algorithms supported:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 14842030-9ecb-4d28-b0e7-76776ffb878c
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315161751, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315165205, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: freelist init
Jan 10 16:58:35 compute-0 ceph-osd[86809]: freelist _read_cfg
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs umount
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluefs mount shared_bdev_used = 27262976
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Git sha 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DB SUMMARY
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DB Session ID:  THG7H36IMJBHPS2MRPBM
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                     Options.env: 0x55d59527fab0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                Options.info_log: 0x55d5962cea80
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.write_buffer_manager: 0x55d596173900
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Compression algorithms supported:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d5952838d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d595283a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 14842030-9ecb-4d28-b0e7-76776ffb878c
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315217323, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315237076, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315242458, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315267274, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315270778, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d5964e8000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: DB pointer 0x55d596488000
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 10 16:58:35 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 16:58:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 16:58:35 compute-0 ceph-osd[86809]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 16:58:35 compute-0 ceph-osd[86809]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 16:58:35 compute-0 ceph-osd[86809]: _get_class not permitted to load lua
Jan 10 16:58:35 compute-0 ceph-osd[86809]: _get_class not permitted to load sdk
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 load_pgs
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 load_pgs opened 0 pgs
Jan 10 16:58:35 compute-0 ceph-osd[86809]: osd.1 0 log_to_monitors true
Jan 10 16:58:35 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:35.383+0000 7f1fce8078c0 -1 osd.1 0 log_to_monitors true
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.463790612 +0000 UTC m=+0.092818153 container create 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.416347481 +0000 UTC m=+0.045375092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:35 compute-0 systemd[1]: Started libpod-conmon-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope.
Jan 10 16:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.60530672 +0000 UTC m=+0.234334271 container init 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.613996421 +0000 UTC m=+0.243023962 container start 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:35 compute-0 determined_gagarin[87364]: 167 167
Jan 10 16:58:35 compute-0 systemd[1]: libpod-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope: Deactivated successfully.
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.630306562 +0000 UTC m=+0.259334143 container attach 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.632815724 +0000 UTC m=+0.261843285 container died 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 16:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbfe068d21816042f7fc0b6bf446f571fd24a871e572d1debb0fd9dfdb9aa207-merged.mount: Deactivated successfully.
Jan 10 16:58:35 compute-0 podman[87315]: 2026-01-10 16:58:35.711800596 +0000 UTC m=+0.340828137 container remove 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:35 compute-0 systemd[1]: libpod-conmon-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope: Deactivated successfully.
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: Deploying daemon osd.2 on compute-0
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:35 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.034598501 +0000 UTC m=+0.083779301 container create 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:35.990836647 +0000 UTC m=+0.040017527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:36 compute-0 systemd[1]: Started libpod-conmon-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope.
Jan 10 16:58:36 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.173076141 +0000 UTC m=+0.222256951 container init 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.194200041 +0000 UTC m=+0.243380831 container start 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.20178609 +0000 UTC m=+0.250966890 container attach 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 16:58:36 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 16:58:36 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 16:58:36 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]:                             [--no-systemd] [--no-tmpfs]
Jan 10 16:58:36 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.911 iops: 4329.178 elapsed_sec: 0.693
Jan 10 16:58:36 compute-0 ceph-osd[85764]: log_channel(cluster) log [WRN] : OSD bench result of 4329.177571 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 0 waiting for initial osdmap
Jan 10 16:58:36 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:36.742+0000 7f0516f54640 -1 osd.0 0 waiting for initial osdmap
Jan 10 16:58:36 compute-0 systemd[1]: libpod-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope: Deactivated successfully.
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.747803972 +0000 UTC m=+0.796984752 container died 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 16:58:36 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:36.774+0000 7f0511d59640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 10 16:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4-merged.mount: Deactivated successfully.
Jan 10 16:58:36 compute-0 podman[87394]: 2026-01-10 16:58:36.796811088 +0000 UTC m=+0.845991868 container remove 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 16:58:36 compute-0 systemd[1]: libpod-conmon-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope: Deactivated successfully.
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 done with init, starting boot process
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 start_boot
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 16:58:36 compute-0 ceph-osd[86809]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745] boot
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 10 16:58:36 compute-0 ceph-osd[85764]: osd.0 10 state: booting -> active
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 10 16:58:36 compute-0 ceph-mon[75249]: osdmap e9: 3 total, 0 up, 3 in
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 16:58:36 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:36 compute-0 ceph-mon[75249]: OSD bench result of 4329.177571 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:36 compute-0 ceph-mgr[75538]: [devicehealth INFO root] creating mgr pool
Jan 10 16:58:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 10 16:58:36 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 10 16:58:37 compute-0 systemd[1]: Reloading.
Jan 10 16:58:37 compute-0 systemd-rc-local-generator[87477]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:37 compute-0 systemd-sysv-generator[87481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:37 compute-0 systemd[1]: Reloading.
Jan 10 16:58:37 compute-0 systemd-rc-local-generator[87515]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:58:37 compute-0 systemd-sysv-generator[87518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 16:58:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 16:58:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 10 16:58:37 compute-0 ceph-osd[85764]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 16:58:37 compute-0 ceph-osd[85764]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 10 16:58:37 compute-0 ceph-osd[85764]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:58:37
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 16:58:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 10 16:58:38 compute-0 systemd[1]: Starting Ceph osd.2 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:58:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:38 compute-0 ceph-mon[75249]: osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745] boot
Jan 10 16:58:38 compute-0 ceph-mon[75249]: osdmap e10: 3 total, 1 up, 3 in
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 10 16:58:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 10 16:58:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:38 compute-0 podman[87571]: 2026-01-10 16:58:38.393645277 +0000 UTC m=+0.045476995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:38 compute-0 podman[87571]: 2026-01-10 16:58:38.677392774 +0000 UTC m=+0.329224432 container create 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:38 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:58:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:58:38 compute-0 podman[87571]: 2026-01-10 16:58:38.958076832 +0000 UTC m=+0.609908510 container init 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:58:38 compute-0 podman[87571]: 2026-01-10 16:58:38.96598409 +0000 UTC m=+0.617815758 container start 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:38 compute-0 podman[87571]: 2026-01-10 16:58:38.982866998 +0000 UTC m=+0.634698656 container attach 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 10 16:58:39 compute-0 ceph-mon[75249]: purged_snaps scrub starts
Jan 10 16:58:39 compute-0 ceph-mon[75249]: purged_snaps scrub ok
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 10 16:58:39 compute-0 ceph-mon[75249]: osdmap e11: 3 total, 1 up, 3 in
Jan 10 16:58:39 compute-0 ceph-mon[75249]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 10 16:58:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 10 16:58:39 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 10 16:58:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:39 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:39 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:39 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:39 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:39 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:39 compute-0 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:39 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:39 compute-0 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:39 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 10 16:58:40 compute-0 ceph-mon[75249]: osdmap e12: 3 total, 1 up, 3 in
Jan 10 16:58:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:40 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:40 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:40 compute-0 lvm[87670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:40 compute-0 lvm[87670]: VG ceph_vg0 finished
Jan 10 16:58:40 compute-0 lvm[87673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:40 compute-0 lvm[87673]: VG ceph_vg1 finished
Jan 10 16:58:40 compute-0 lvm[87674]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:40 compute-0 lvm[87674]: VG ceph_vg2 finished
Jan 10 16:58:40 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:40 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:40 compute-0 bash[87571]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 16:58:40 compute-0 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:40 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:40 compute-0 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 16:58:40 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:40 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 16:58:41 compute-0 ceph-mon[75249]: pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:41 compute-0 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 16:58:41 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 16:58:41 compute-0 bash[87571]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 16:58:41 compute-0 systemd[1]: libpod-2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155.scope: Deactivated successfully.
Jan 10 16:58:41 compute-0 systemd[1]: libpod-2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155.scope: Consumed 3.355s CPU time.
Jan 10 16:58:41 compute-0 podman[87571]: 2026-01-10 16:58:41.373278749 +0000 UTC m=+3.025110438 container died 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 16:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7-merged.mount: Deactivated successfully.
Jan 10 16:58:41 compute-0 podman[87571]: 2026-01-10 16:58:41.53634862 +0000 UTC m=+3.188180268 container remove 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:58:41 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:41 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:41 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:41 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:42 compute-0 podman[87846]: 2026-01-10 16:58:42.091850687 +0000 UTC m=+0.075448750 container create d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 16:58:42 compute-0 podman[87846]: 2026-01-10 16:58:42.043321025 +0000 UTC m=+0.026919148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:42 compute-0 podman[87846]: 2026-01-10 16:58:42.301418051 +0000 UTC m=+0.285016114 container init d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:58:42 compute-0 podman[87846]: 2026-01-10 16:58:42.309097283 +0000 UTC m=+0.292695336 container start d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:42 compute-0 bash[87846]: d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15
Jan 10 16:58:42 compute-0 systemd[1]: Started Ceph osd.2 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:58:42 compute-0 ceph-osd[87867]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: pidfile_write: ignore empty --pid-file
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 sudo[86858]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 sudo[87883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:42 compute-0 sudo[87883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 10 16:58:42 compute-0 sudo[87883]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:42 compute-0 ceph-osd[87867]: load: jerasure load: lrc 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 sudo[87921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:58:42 compute-0 sudo[87921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount shared_bdev_used = 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Git sha 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DB SUMMARY
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DB Session ID:  D6JZOUT0P79SMS3CMH42
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                     Options.env: 0x5621ddea5ea0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                Options.info_log: 0x5621deef68a0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.write_buffer_manager: 0x5621ddf0ab40
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Compression algorithms supported:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b77ccd9f-4f29-4234-a608-29d54f994fb8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322781962, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322784170, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: freelist init
Jan 10 16:58:42 compute-0 ceph-osd[87867]: freelist _read_cfg
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs umount
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluefs mount shared_bdev_used = 27262976
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: RocksDB version: 7.9.2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Git sha 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DB SUMMARY
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DB Session ID:  D6JZOUT0P79SMS3CMH43
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: CURRENT file:  CURRENT
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.error_if_exists: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.create_if_missing: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                     Options.env: 0x5621ddea5d50
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                Options.info_log: 0x5621deef7aa0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.statistics: (nil)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.use_fsync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.db_log_dir: 
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.write_buffer_manager: 0x5621ddf0b900
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.unordered_write: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.row_cache: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                              Options.wal_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.two_write_queues: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.wal_compression: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.atomic_flush: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_background_jobs: 4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_background_compactions: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_subcompactions: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.max_open_files: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Compression algorithms supported:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZSTD supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kXpressCompression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kBZip2Compression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kLZ4Compression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kZlibCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kLZ4HCCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         kSnappyCompression supported: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea94b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea94b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5621ddea94b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b77ccd9f-4f29-4234-a608-29d54f994fb8
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322840883, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322845090, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322847893, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322850789, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322852236, "job": 1, "event": "recovery_finished"}
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 16:58:42 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:42 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:42 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5621df0ffc00
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: DB pointer 0x5621df0b0000
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 10 16:58:42 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 16:58:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 16:58:42 compute-0 ceph-osd[87867]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 16:58:42 compute-0 ceph-osd[87867]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 16:58:42 compute-0 ceph-osd[87867]: _get_class not permitted to load lua
Jan 10 16:58:42 compute-0 ceph-osd[87867]: _get_class not permitted to load sdk
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 load_pgs
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 load_pgs opened 0 pgs
Jan 10 16:58:42 compute-0 ceph-osd[87867]: osd.2 0 log_to_monitors true
Jan 10 16:58:42 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:42.952+0000 7f9711cfc8c0 -1 osd.2 0 log_to_monitors true
Jan 10 16:58:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 10 16:58:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.051453538 +0000 UTC m=+0.060045746 container create c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.025248841 +0000 UTC m=+0.033841049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:43 compute-0 systemd[1]: Started libpod-conmon-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope.
Jan 10 16:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:43 compute-0 ceph-mon[75249]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:43 compute-0 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.210592005 +0000 UTC m=+0.219184203 container init c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.220906083 +0000 UTC m=+0.229498281 container start c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 16:58:43 compute-0 zealous_swartz[88393]: 167 167
Jan 10 16:58:43 compute-0 systemd[1]: libpod-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope: Deactivated successfully.
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.241405575 +0000 UTC m=+0.249997813 container attach c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.242256339 +0000 UTC m=+0.250848577 container died c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2905c7d292de9b1c5b403bde5a4d5c8d4a427e52b431aa7ee3a18b1c4ab687-merged.mount: Deactivated successfully.
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:43 compute-0 podman[88377]: 2026-01-10 16:58:43.316171355 +0000 UTC m=+0.324763593 container remove c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 16:58:43 compute-0 systemd[1]: libpod-conmon-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope: Deactivated successfully.
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:43 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:43 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 16:58:43 compute-0 podman[88418]: 2026-01-10 16:58:43.584348981 +0000 UTC m=+0.065359869 container create 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 16:58:43 compute-0 podman[88418]: 2026-01-10 16:58:43.546485607 +0000 UTC m=+0.027496575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:43 compute-0 systemd[1]: Started libpod-conmon-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope.
Jan 10 16:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:43 compute-0 podman[88418]: 2026-01-10 16:58:43.866241464 +0000 UTC m=+0.347252342 container init 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 16:58:43 compute-0 podman[88418]: 2026-01-10 16:58:43.879415914 +0000 UTC m=+0.360426762 container start 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:43 compute-0 podman[88418]: 2026-01-10 16:58:43.883146232 +0000 UTC m=+0.364157120 container attach 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:43 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 16:58:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:43 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 16:58:43 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:43 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 16:58:43 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 16:58:43 compute-0 sshd-session[87864]: Connection closed by authenticating user rpc 216.36.124.133 port 40136 [preauth]
Jan 10 16:58:44 compute-0 sudo[88464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfuspfyrwtjebsfcvnecmqavwwsiegai ; /usr/bin/python3'
Jan 10 16:58:44 compute-0 sudo[88464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 11.559 iops: 2959.228 elapsed_sec: 1.014
Jan 10 16:58:44 compute-0 ceph-osd[86809]: log_channel(cluster) log [WRN] : OSD bench result of 2959.227629 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 0 waiting for initial osdmap
Jan 10 16:58:44 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:44.318+0000 7f1fcaf9b640 -1 osd.1 0 waiting for initial osdmap
Jan 10 16:58:44 compute-0 python3[88466]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 10 16:58:44 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:44.351+0000 7f1fc558e640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:44 compute-0 podman[88480]: 2026-01-10 16:58:44.415856391 +0000 UTC m=+0.063801044 container create 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:44 compute-0 systemd[1]: Started libpod-conmon-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope.
Jan 10 16:58:44 compute-0 podman[88480]: 2026-01-10 16:58:44.385804583 +0000 UTC m=+0.033749256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 done with init, starting boot process
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 start_boot
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 16:58:44 compute-0 ceph-osd[87867]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155] boot
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 10 16:58:44 compute-0 podman[88480]: 2026-01-10 16:58:44.513279535 +0000 UTC m=+0.161224188 container init 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:44 compute-0 podman[88480]: 2026-01-10 16:58:44.522739488 +0000 UTC m=+0.170684141 container start 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:44 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:44 compute-0 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 10 16:58:44 compute-0 ceph-mon[75249]: osdmap e13: 3 total, 1 up, 3 in
Jan 10 16:58:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:44 compute-0 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 16:58:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:44 compute-0 ceph-mon[75249]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 10 16:58:44 compute-0 ceph-mon[75249]: OSD bench result of 2959.227629 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:44 compute-0 podman[88480]: 2026-01-10 16:58:44.546204036 +0000 UTC m=+0.194148719 container attach 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:44 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 14 state: booting -> active
Jan 10 16:58:44 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:44 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:44 compute-0 lvm[88581]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:44 compute-0 lvm[88581]: VG ceph_vg1 finished
Jan 10 16:58:44 compute-0 lvm[88580]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:44 compute-0 lvm[88580]: VG ceph_vg0 finished
Jan 10 16:58:44 compute-0 lvm[88583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:44 compute-0 lvm[88583]: VG ceph_vg2 finished
Jan 10 16:58:45 compute-0 objective_wilson[88436]: {}
Jan 10 16:58:45 compute-0 systemd[1]: libpod-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Deactivated successfully.
Jan 10 16:58:45 compute-0 podman[88418]: 2026-01-10 16:58:45.065291291 +0000 UTC m=+1.546302169 container died 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:45 compute-0 systemd[1]: libpod-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Consumed 1.755s CPU time.
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904687390' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:58:45 compute-0 hopeful_ride[88511]: 
Jan 10 16:58:45 compute-0 hopeful_ride[88511]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":86,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1768064324,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":27611136,"bytes_avail":21443031040,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-10T16:57:15:771836+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 16:58:45 compute-0 systemd[1]: libpod-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope: Deactivated successfully.
Jan 10 16:58:45 compute-0 conmon[88511]: conmon 7e0e74b231e5d34edc44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope/container/memory.events
Jan 10 16:58:45 compute-0 podman[88480]: 2026-01-10 16:58:45.11511295 +0000 UTC m=+0.763057613 container died 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775-merged.mount: Deactivated successfully.
Jan 10 16:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3-merged.mount: Deactivated successfully.
Jan 10 16:58:45 compute-0 podman[88418]: 2026-01-10 16:58:45.370098417 +0000 UTC m=+1.851109265 container remove 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 16:58:45 compute-0 systemd[1]: libpod-conmon-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Deactivated successfully.
Jan 10 16:58:45 compute-0 sudo[87921]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:45 compute-0 podman[88480]: 2026-01-10 16:58:45.457022127 +0000 UTC m=+1.104966820 container remove 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:45 compute-0 sudo[88464]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:45 compute-0 systemd[1]: libpod-conmon-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope: Deactivated successfully.
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 10 16:58:45 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:45 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 10 16:58:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:45 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:45 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 16:58:45 compute-0 ceph-mon[75249]: osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155] boot
Jan 10 16:58:45 compute-0 ceph-mon[75249]: osdmap e14: 3 total, 2 up, 3 in
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2904687390' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:45 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:45 compute-0 sudo[88608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:58:45 compute-0 sudo[88608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:45 compute-0 sudo[88608]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:45 compute-0 sudo[88633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:45 compute-0 sudo[88633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:45 compute-0 sudo[88633]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:45 compute-0 sudo[88658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:58:45 compute-0 sudo[88658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:45 compute-0 sudo[88706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjyucfkaqfcgvlmwyawlktwqtimxelxu ; /usr/bin/python3'
Jan 10 16:58:45 compute-0 sudo[88706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:45 compute-0 python3[88708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:45 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:46 compute-0 podman[88709]: 2026-01-10 16:58:46.106682384 +0000 UTC m=+0.119420851 container create c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:46 compute-0 systemd[1]: Started libpod-conmon-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope.
Jan 10 16:58:46 compute-0 podman[88709]: 2026-01-10 16:58:46.058197004 +0000 UTC m=+0.070935531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:46 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:46 compute-0 podman[88709]: 2026-01-10 16:58:46.224108637 +0000 UTC m=+0.236847114 container init c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:46 compute-0 podman[88709]: 2026-01-10 16:58:46.237929006 +0000 UTC m=+0.250667493 container start c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 16:58:46 compute-0 podman[88709]: 2026-01-10 16:58:46.264877194 +0000 UTC m=+0.277615661 container attach c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:58:46 compute-0 podman[88768]: 2026-01-10 16:58:46.343906277 +0000 UTC m=+0.083839183 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:46 compute-0 podman[88768]: 2026-01-10 16:58:46.441209338 +0000 UTC m=+0.181142214 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:46 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:46 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:46 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 10 16:58:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 10 16:58:46 compute-0 ceph-mon[75249]: purged_snaps scrub starts
Jan 10 16:58:46 compute-0 ceph-mon[75249]: purged_snaps scrub ok
Jan 10 16:58:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:46 compute-0 ceph-mon[75249]: osdmap e15: 3 total, 2 up, 3 in
Jan 10 16:58:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:46 compute-0 ceph-mon[75249]: pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:46 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 10 16:58:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:46 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:46 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:47 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:47 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:47 compute-0 sudo[88658]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 10 16:58:47 compute-0 charming_bhabha[88752]: pool 'vms' created
Jan 10 16:58:47 compute-0 ceph-mon[75249]: osdmap e16: 3 total, 2 up, 3 in
Jan 10 16:58:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:47 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:47 compute-0 systemd[1]: libpod-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope: Deactivated successfully.
Jan 10 16:58:47 compute-0 podman[88709]: 2026-01-10 16:58:47.677816939 +0000 UTC m=+1.690555476 container died c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 10 16:58:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:47 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328-merged.mount: Deactivated successfully.
Jan 10 16:58:47 compute-0 sudo[88954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:47 compute-0 sudo[88954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:47 compute-0 sudo[88954]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:47 compute-0 podman[88709]: 2026-01-10 16:58:47.845108832 +0000 UTC m=+1.857847409 container remove c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 16:58:47 compute-0 systemd[1]: libpod-conmon-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope: Deactivated successfully.
Jan 10 16:58:47 compute-0 sudo[88979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- inventory --format=json-pretty --filter-for-batch
Jan 10 16:58:47 compute-0 sudo[88979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:47 compute-0 sudo[88706]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:47 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v43: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:48 compute-0 sudo[89027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxccddyjlugpshmdygkkqlmlygfgbva ; /usr/bin/python3'
Jan 10 16:58:48 compute-0 sudo[89027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:48 compute-0 ceph-mgr[75538]: [devicehealth INFO root] creating main.db for devicehealth
Jan 10 16:58:48 compute-0 python3[89029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.264159697 +0000 UTC m=+0.095850949 container create a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:48 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.208428667 +0000 UTC m=+0.040119919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:48 compute-0 ceph-mgr[75538]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 10 16:58:48 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 10 16:58:48 compute-0 systemd[1]: Started libpod-conmon-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope.
Jan 10 16:58:48 compute-0 podman[89059]: 2026-01-10 16:58:48.337427644 +0000 UTC m=+0.089629220 container create c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:58:48 compute-0 sudo[89081]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 10 16:58:48 compute-0 sudo[89081]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 10 16:58:48 compute-0 sudo[89081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 10 16:58:48 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:48 compute-0 podman[89059]: 2026-01-10 16:58:48.275840525 +0000 UTC m=+0.028042121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:48 compute-0 sudo[89081]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:48 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 10 16:58:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 16:58:48 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.385338408 +0000 UTC m=+0.217029710 container init a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.39302643 +0000 UTC m=+0.224717702 container start a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 16:58:48 compute-0 jovial_jackson[89082]: 167 167
Jan 10 16:58:48 compute-0 systemd[1]: Started libpod-conmon-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope.
Jan 10 16:58:48 compute-0 systemd[1]: libpod-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope: Deactivated successfully.
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.413172182 +0000 UTC m=+0.244863524 container attach a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.413585494 +0000 UTC m=+0.245276746 container died a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:48 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc322d6120f1e5e94c933b7a3fba9ccb529281495a09e47565c7006214a252c3-merged.mount: Deactivated successfully.
Jan 10 16:58:48 compute-0 podman[89042]: 2026-01-10 16:58:48.542288532 +0000 UTC m=+0.373979794 container remove a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 16:58:48 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:48 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:48 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:48 compute-0 systemd[1]: libpod-conmon-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope: Deactivated successfully.
Jan 10 16:58:48 compute-0 podman[89059]: 2026-01-10 16:58:48.575659446 +0000 UTC m=+0.327861052 container init c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:58:48 compute-0 podman[89059]: 2026-01-10 16:58:48.590071732 +0000 UTC m=+0.342273318 container start c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:48 compute-0 podman[89059]: 2026-01-10 16:58:48.59381453 +0000 UTC m=+0.346016116 container attach c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:48 compute-0 ceph-mon[75249]: osdmap e17: 3 total, 2 up, 3 in
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:48 compute-0 ceph-mon[75249]: pgmap v43: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 16:58:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:48 compute-0 podman[89115]: 2026-01-10 16:58:48.75858238 +0000 UTC m=+0.032775948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:48 compute-0 podman[89115]: 2026-01-10 16:58:48.941763711 +0000 UTC m=+0.215957279 container create aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:48 compute-0 systemd[1]: Started libpod-conmon-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope.
Jan 10 16:58:49 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:49 compute-0 podman[89115]: 2026-01-10 16:58:49.034003756 +0000 UTC m=+0.308197314 container init aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 16:58:49 compute-0 podman[89115]: 2026-01-10 16:58:49.040978158 +0000 UTC m=+0.315171696 container start aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:49 compute-0 podman[89115]: 2026-01-10 16:58:49.050726799 +0000 UTC m=+0.324920357 container attach aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.611 iops: 5020.469 elapsed_sec: 0.598
Jan 10 16:58:49 compute-0 ceph-osd[87867]: log_channel(cluster) log [WRN] : OSD bench result of 5020.468677 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 0 waiting for initial osdmap
Jan 10 16:58:49 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:49.547+0000 7f970e490640 -1 osd.2 0 waiting for initial osdmap
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 set_numa_affinity not setting numa affinity
Jan 10 16:58:49 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:49.579+0000 7f9708a83640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 10 16:58:49 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 10 16:58:49 compute-0 sad_jemison[89093]: pool 'volumes' created
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mkxlpr(active, since 72s)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536] boot
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:49 compute-0 systemd[1]: libpod-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope: Deactivated successfully.
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 18 state: booting -> active
Jan 10 16:58:49 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:49 compute-0 podman[89565]: 2026-01-10 16:58:49.815401479 +0000 UTC m=+0.030598715 container died c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]: [
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:     {
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "available": false,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "being_replaced": false,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "ceph_device_lvm": false,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "lsm_data": {},
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "lvs": [],
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "path": "/dev/sr0",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "rejected_reasons": [
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "Has a FileSystem",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "Insufficient space (<5GB)"
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         ],
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         "sys_api": {
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "actuators": null,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "device_nodes": [
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:                 "sr0"
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             ],
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "devname": "sr0",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "human_readable_size": "482.00 KB",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "id_bus": "ata",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "model": "QEMU DVD-ROM",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "nr_requests": "2",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "parent": "/dev/sr0",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "partitions": {},
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "path": "/dev/sr0",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "removable": "1",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "rev": "2.5+",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "ro": "0",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "rotational": "1",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "sas_address": "",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "sas_device_handle": "",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "scheduler_mode": "mq-deadline",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "sectors": 0,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "sectorsize": "2048",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "size": 493568.0,
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "support_discard": "2048",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "type": "disk",
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:             "vendor": "QEMU"
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:         }
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]:     }
Jan 10 16:58:49 compute-0 blissful_sinoussi[89150]: ]
Jan 10 16:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d-merged.mount: Deactivated successfully.
Jan 10 16:58:49 compute-0 systemd[1]: libpod-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope: Deactivated successfully.
Jan 10 16:58:49 compute-0 podman[89565]: 2026-01-10 16:58:49.864676462 +0000 UTC m=+0.079873698 container remove c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 16:58:49 compute-0 podman[89115]: 2026-01-10 16:58:49.868638987 +0000 UTC m=+1.142832545 container died aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 16:58:49 compute-0 systemd[1]: libpod-conmon-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope: Deactivated successfully.
Jan 10 16:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0-merged.mount: Deactivated successfully.
Jan 10 16:58:49 compute-0 sudo[89027]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:49 compute-0 podman[89115]: 2026-01-10 16:58:49.91166544 +0000 UTC m=+1.185858988 container remove aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 16:58:49 compute-0 systemd[1]: libpod-conmon-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope: Deactivated successfully.
Jan 10 16:58:49 compute-0 sudo[88979]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v45: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 16:58:49 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:58:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:50 compute-0 sudo[89872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euhgxbsbiabjpqhbzyxyqjqruuvvezef ; /usr/bin/python3'
Jan 10 16:58:50 compute-0 sudo[89872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:50 compute-0 sudo[89854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:50 compute-0 sudo[89854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:50 compute-0 sudo[89854]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:50 compute-0 sudo[89892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:58:50 compute-0 sudo[89892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:50 compute-0 python3[89889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:50 compute-0 podman[89917]: 2026-01-10 16:58:50.314044234 +0000 UTC m=+0.077426148 container create 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:50 compute-0 systemd[1]: Started libpod-conmon-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope.
Jan 10 16:58:50 compute-0 podman[89917]: 2026-01-10 16:58:50.286204429 +0000 UTC m=+0.049586383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:50 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:50 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 podman[89917]: 2026-01-10 16:58:50.419558832 +0000 UTC m=+0.182940806 container init 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 16:58:50 compute-0 podman[89917]: 2026-01-10 16:58:50.426850073 +0000 UTC m=+0.190231947 container start 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:50 compute-0 podman[89917]: 2026-01-10 16:58:50.430095626 +0000 UTC m=+0.193477590 container attach 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.469136524 +0000 UTC m=+0.075781570 container create 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:50 compute-0 systemd[1]: Started libpod-conmon-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope.
Jan 10 16:58:50 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.434568816 +0000 UTC m=+0.041213912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.53441678 +0000 UTC m=+0.141061836 container init 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.545187121 +0000 UTC m=+0.151832147 container start 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:50 compute-0 sad_wozniak[89964]: 167 167
Jan 10 16:58:50 compute-0 systemd[1]: libpod-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope: Deactivated successfully.
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.549753683 +0000 UTC m=+0.156398709 container attach 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 16:58:50 compute-0 conmon[89964]: conmon 3f2057eae82e69604512 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope/container/memory.events
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.550590577 +0000 UTC m=+0.157235603 container died 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 16:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-54968204b5744bff9e10cc19f78007e9ef5bd5d51398262992f6a6347f12dc79-merged.mount: Deactivated successfully.
Jan 10 16:58:50 compute-0 podman[89946]: 2026-01-10 16:58:50.602855306 +0000 UTC m=+0.209500312 container remove 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:50 compute-0 systemd[1]: libpod-conmon-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope: Deactivated successfully.
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 10 16:58:50 compute-0 ceph-mon[75249]: OSD bench result of 5020.468677 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mgrmap e10: compute-0.mkxlpr(active, since 72s)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536] boot
Jan 10 16:58:50 compute-0 ceph-mon[75249]: osdmap e18: 3 total, 3 up, 3 in
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:50 compute-0 ceph-mon[75249]: pgmap v45: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 16:58:50 compute-0 ceph-mon[75249]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:50 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:50 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:50 compute-0 podman[90007]: 2026-01-10 16:58:50.775643317 +0000 UTC m=+0.040199902 container create 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 16:58:50 compute-0 systemd[1]: Started libpod-conmon-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope.
Jan 10 16:58:50 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:50 compute-0 podman[90007]: 2026-01-10 16:58:50.758195503 +0000 UTC m=+0.022752118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:50 compute-0 podman[90007]: 2026-01-10 16:58:50.886095998 +0000 UTC m=+0.150652633 container init 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:50 compute-0 podman[90007]: 2026-01-10 16:58:50.892809592 +0000 UTC m=+0.157366177 container start 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:50 compute-0 podman[90007]: 2026-01-10 16:58:50.896029805 +0000 UTC m=+0.160586390 container attach 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:51 compute-0 hardcore_mccarthy[90025]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:58:51 compute-0 hardcore_mccarthy[90025]: --> All data devices are unavailable
Jan 10 16:58:51 compute-0 systemd[1]: libpod-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope: Deactivated successfully.
Jan 10 16:58:51 compute-0 conmon[90025]: conmon 26aaeb6c554cddd08274 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope/container/memory.events
Jan 10 16:58:51 compute-0 podman[90007]: 2026-01-10 16:58:51.453428227 +0000 UTC m=+0.717984822 container died 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46-merged.mount: Deactivated successfully.
Jan 10 16:58:51 compute-0 podman[90007]: 2026-01-10 16:58:51.537532056 +0000 UTC m=+0.802088641 container remove 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:51 compute-0 systemd[1]: libpod-conmon-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope: Deactivated successfully.
Jan 10 16:58:51 compute-0 sudo[89892]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:51 compute-0 sudo[90061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:51 compute-0 sudo[90061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:51 compute-0 sudo[90061]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:51 compute-0 sudo[90086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:58:51 compute-0 sudo[90086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 10 16:58:51 compute-0 ceph-mon[75249]: osdmap e19: 3 total, 3 up, 3 in
Jan 10 16:58:51 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:51 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 10 16:58:51 compute-0 beautiful_mcnulty[89944]: pool 'backups' created
Jan 10 16:58:51 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 10 16:58:51 compute-0 systemd[1]: libpod-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope: Deactivated successfully.
Jan 10 16:58:51 compute-0 podman[89917]: 2026-01-10 16:58:51.796744274 +0000 UTC m=+1.560126148 container died 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2-merged.mount: Deactivated successfully.
Jan 10 16:58:51 compute-0 podman[89917]: 2026-01-10 16:58:51.844893675 +0000 UTC m=+1.608275549 container remove 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:58:51 compute-0 systemd[1]: libpod-conmon-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope: Deactivated successfully.
Jan 10 16:58:51 compute-0 sudo[89872]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:51 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:52 compute-0 sudo[90161]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtmjmeucohrcidgdegyoxcvvsepjvpnj ; /usr/bin/python3'
Jan 10 16:58:52 compute-0 sudo[90161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.076005192 +0000 UTC m=+0.057236925 container create 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:52 compute-0 systemd[1]: Started libpod-conmon-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope.
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.044281135 +0000 UTC m=+0.025512968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:52 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.185252587 +0000 UTC m=+0.166484350 container init 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.193413023 +0000 UTC m=+0.174644756 container start 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.196956496 +0000 UTC m=+0.178188259 container attach 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 16:58:52 compute-0 modest_mahavira[90179]: 167 167
Jan 10 16:58:52 compute-0 systemd[1]: libpod-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope: Deactivated successfully.
Jan 10 16:58:52 compute-0 conmon[90179]: conmon 431392fee8a297f6a654 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope/container/memory.events
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.201289741 +0000 UTC m=+0.182521474 container died 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 16:58:52 compute-0 python3[90173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-73f35c8eb60c0cb4e529a67725ef81b0207d2b85bd0b4e552de1ba37cc544a1c-merged.mount: Deactivated successfully.
Jan 10 16:58:52 compute-0 podman[90157]: 2026-01-10 16:58:52.241481872 +0000 UTC m=+0.222713605 container remove 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 16:58:52 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:52 compute-0 systemd[1]: libpod-conmon-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope: Deactivated successfully.
Jan 10 16:58:52 compute-0 podman[90186]: 2026-01-10 16:58:52.283273189 +0000 UTC m=+0.052611251 container create c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:58:52 compute-0 systemd[1]: Started libpod-conmon-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope.
Jan 10 16:58:52 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:52 compute-0 podman[90186]: 2026-01-10 16:58:52.266070762 +0000 UTC m=+0.035408854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 podman[90186]: 2026-01-10 16:58:52.378175801 +0000 UTC m=+0.147513883 container init c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:52 compute-0 podman[90186]: 2026-01-10 16:58:52.386934984 +0000 UTC m=+0.156273096 container start c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:52 compute-0 podman[90186]: 2026-01-10 16:58:52.391413363 +0000 UTC m=+0.160751435 container attach c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.426176907 +0000 UTC m=+0.047270186 container create 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:52 compute-0 systemd[1]: Started libpod-conmon-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope.
Jan 10 16:58:52 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.406648313 +0000 UTC m=+0.027741602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.506881479 +0000 UTC m=+0.127974768 container init 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.521105829 +0000 UTC m=+0.142199108 container start 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.525023973 +0000 UTC m=+0.146117302 container attach 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 10 16:58:52 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:52 compute-0 ceph-mon[75249]: osdmap e20: 3 total, 3 up, 3 in
Jan 10 16:58:52 compute-0 ceph-mon[75249]: pgmap v48: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 10 16:58:52 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 10 16:58:52 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:52 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]: {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     "0": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "devices": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "/dev/loop3"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             ],
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_name": "ceph_lv0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_size": "21470642176",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "name": "ceph_lv0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "tags": {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.crush_device_class": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.encrypted": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_id": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.vdo": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.with_tpm": "0"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             },
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "vg_name": "ceph_vg0"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         }
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     ],
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     "1": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "devices": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "/dev/loop4"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             ],
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_name": "ceph_lv1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_size": "21470642176",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "name": "ceph_lv1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "tags": {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.crush_device_class": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.encrypted": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_id": "1",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.vdo": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.with_tpm": "0"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             },
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "vg_name": "ceph_vg1"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         }
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     ],
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     "2": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "devices": [
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "/dev/loop5"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             ],
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_name": "ceph_lv2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_size": "21470642176",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "name": "ceph_lv2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "tags": {
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.crush_device_class": "",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.encrypted": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osd_id": "2",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.vdo": "0",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:                 "ceph.with_tpm": "0"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             },
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "type": "block",
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:             "vg_name": "ceph_vg2"
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:         }
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]:     ]
Jan 10 16:58:52 compute-0 wonderful_shannon[90237]: }
Jan 10 16:58:52 compute-0 systemd[1]: libpod-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope: Deactivated successfully.
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.886152195 +0000 UTC m=+0.507245494 container died 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe-merged.mount: Deactivated successfully.
Jan 10 16:58:52 compute-0 podman[90220]: 2026-01-10 16:58:52.936671754 +0000 UTC m=+0.557765043 container remove 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:52 compute-0 systemd[1]: libpod-conmon-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope: Deactivated successfully.
Jan 10 16:58:53 compute-0 sudo[90086]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:53 compute-0 sudo[90280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:53 compute-0 sudo[90280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:53 compute-0 sudo[90280]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:53 compute-0 sudo[90305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:58:53 compute-0 sudo[90305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.467967482 +0000 UTC m=+0.043914450 container create 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:53 compute-0 systemd[1]: Started libpod-conmon-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope.
Jan 10 16:58:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.546436848 +0000 UTC m=+0.122383856 container init 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.45232304 +0000 UTC m=+0.028270018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.555643254 +0000 UTC m=+0.131590242 container start 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.559479645 +0000 UTC m=+0.135426623 container attach 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 16:58:53 compute-0 friendly_hugle[90359]: 167 167
Jan 10 16:58:53 compute-0 systemd[1]: libpod-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope: Deactivated successfully.
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.561442342 +0000 UTC m=+0.137389320 container died 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b879e2ace163a989734e21f582848ce298ee764a608067259bd0f0c4ee8a8f60-merged.mount: Deactivated successfully.
Jan 10 16:58:53 compute-0 podman[90342]: 2026-01-10 16:58:53.608076549 +0000 UTC m=+0.184023517 container remove 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:53 compute-0 systemd[1]: libpod-conmon-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope: Deactivated successfully.
Jan 10 16:58:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 10 16:58:53 compute-0 ceph-mon[75249]: osdmap e21: 3 total, 3 up, 3 in
Jan 10 16:58:53 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:53 compute-0 podman[90383]: 2026-01-10 16:58:53.794871095 +0000 UTC m=+0.045403432 container create 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 10 16:58:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 10 16:58:53 compute-0 pensive_chaplygin[90212]: pool 'images' created
Jan 10 16:58:53 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 10 16:58:53 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:53 compute-0 systemd[1]: Started libpod-conmon-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope.
Jan 10 16:58:53 compute-0 systemd[1]: libpod-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope: Deactivated successfully.
Jan 10 16:58:53 compute-0 podman[90186]: 2026-01-10 16:58:53.840862454 +0000 UTC m=+1.610200636 container died c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:53 compute-0 podman[90383]: 2026-01-10 16:58:53.775207877 +0000 UTC m=+0.025740214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe-merged.mount: Deactivated successfully.
Jan 10 16:58:53 compute-0 podman[90383]: 2026-01-10 16:58:53.882254349 +0000 UTC m=+0.132786716 container init 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:53 compute-0 podman[90383]: 2026-01-10 16:58:53.892001661 +0000 UTC m=+0.142533978 container start 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:58:53 compute-0 podman[90383]: 2026-01-10 16:58:53.897128239 +0000 UTC m=+0.147660576 container attach 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:58:53 compute-0 podman[90186]: 2026-01-10 16:58:53.902663999 +0000 UTC m=+1.672002081 container remove c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 10 16:58:53 compute-0 systemd[1]: libpod-conmon-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope: Deactivated successfully.
Jan 10 16:58:53 compute-0 sudo[90161]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:53 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v51: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:54 compute-0 sudo[90440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztyitvkaodppbmtjnkgcvxpslmnbfbti ; /usr/bin/python3'
Jan 10 16:58:54 compute-0 sudo[90440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:54 compute-0 python3[90442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.261932316 +0000 UTC m=+0.048391929 container create f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:54 compute-0 systemd[1]: Started libpod-conmon-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope.
Jan 10 16:58:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.241313761 +0000 UTC m=+0.027773394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.345982484 +0000 UTC m=+0.132442117 container init f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.354379187 +0000 UTC m=+0.140838800 container start f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.357897839 +0000 UTC m=+0.144357452 container attach f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:54 compute-0 lvm[90553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:58:54 compute-0 lvm[90553]: VG ceph_vg0 finished
Jan 10 16:58:54 compute-0 lvm[90556]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:54 compute-0 lvm[90556]: VG ceph_vg1 finished
Jan 10 16:58:54 compute-0 lvm[90558]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:58:54 compute-0 lvm[90558]: VG ceph_vg2 finished
Jan 10 16:58:54 compute-0 lvm[90559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:54 compute-0 lvm[90559]: VG ceph_vg1 finished
Jan 10 16:58:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:54 compute-0 lvm[90562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:58:54 compute-0 lvm[90562]: VG ceph_vg1 finished
Jan 10 16:58:54 compute-0 serene_hermann[90400]: {}
Jan 10 16:58:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 10 16:58:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 10 16:58:54 compute-0 jovial_ritchie[90477]: pool 'cephfs.cephfs.meta' created
Jan 10 16:58:54 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 10 16:58:54 compute-0 systemd[1]: libpod-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Deactivated successfully.
Jan 10 16:58:54 compute-0 systemd[1]: libpod-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Consumed 1.489s CPU time.
Jan 10 16:58:54 compute-0 podman[90383]: 2026-01-10 16:58:54.822904381 +0000 UTC m=+1.073436778 container died 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 16:58:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:54 compute-0 ceph-mon[75249]: osdmap e22: 3 total, 3 up, 3 in
Jan 10 16:58:54 compute-0 ceph-mon[75249]: pgmap v51: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:54 compute-0 systemd[1]: libpod-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope: Deactivated successfully.
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.833312092 +0000 UTC m=+0.619771745 container died f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b-merged.mount: Deactivated successfully.
Jan 10 16:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033-merged.mount: Deactivated successfully.
Jan 10 16:58:54 compute-0 podman[90383]: 2026-01-10 16:58:54.880568497 +0000 UTC m=+1.131100804 container remove 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 16:58:54 compute-0 podman[90453]: 2026-01-10 16:58:54.902571323 +0000 UTC m=+0.689030976 container remove f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:54 compute-0 systemd[1]: libpod-conmon-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Deactivated successfully.
Jan 10 16:58:54 compute-0 systemd[1]: libpod-conmon-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope: Deactivated successfully.
Jan 10 16:58:54 compute-0 sudo[90305]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:54 compute-0 sudo[90440]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:55 compute-0 sudo[90589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:58:55 compute-0 sudo[90589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:55 compute-0 sudo[90589]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:55 compute-0 sudo[90638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgteaistiaaaombhmyzqmedvojrctrii ; /usr/bin/python3'
Jan 10 16:58:55 compute-0 sudo[90638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:55 compute-0 sudo[90637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:55 compute-0 sudo[90637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:55 compute-0 sudo[90637]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:55 compute-0 sudo[90665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:58:55 compute-0 sudo[90665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:55 compute-0 python3[90651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:55 compute-0 podman[90690]: 2026-01-10 16:58:55.290303333 +0000 UTC m=+0.042170449 container create 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:55 compute-0 systemd[1]: Started libpod-conmon-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope.
Jan 10 16:58:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:55 compute-0 podman[90690]: 2026-01-10 16:58:55.364733383 +0000 UTC m=+0.116600509 container init 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 16:58:55 compute-0 podman[90690]: 2026-01-10 16:58:55.272393306 +0000 UTC m=+0.024260442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:55 compute-0 podman[90690]: 2026-01-10 16:58:55.373119616 +0000 UTC m=+0.124986722 container start 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:58:55 compute-0 podman[90690]: 2026-01-10 16:58:55.376338009 +0000 UTC m=+0.128205135 container attach 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 16:58:55 compute-0 podman[90771]: 2026-01-10 16:58:55.653636159 +0000 UTC m=+0.060288342 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:58:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:55 compute-0 podman[90771]: 2026-01-10 16:58:55.759414245 +0000 UTC m=+0.166066408 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 10 16:58:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 10 16:58:55 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 10 16:58:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 16:58:55 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:55 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:55 compute-0 ceph-mon[75249]: osdmap e23: 3 total, 3 up, 3 in
Jan 10 16:58:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:55 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v54: 6 pgs: 4 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:56 compute-0 sudo[90665]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:56 compute-0 sudo[90926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:56 compute-0 sudo[90926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:56 compute-0 sudo[90926]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:56 compute-0 sudo[90951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:58:56 compute-0 sudo[90951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.799256303 +0000 UTC m=+0.037694080 container create 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 10 16:58:56 compute-0 magical_blackwell[90705]: pool 'cephfs.cephfs.data' created
Jan 10 16:58:56 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 10 16:58:56 compute-0 systemd[1]: Started libpod-conmon-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope.
Jan 10 16:58:56 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:58:56 compute-0 ceph-mon[75249]: osdmap e24: 3 total, 3 up, 3 in
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: pgmap v54: 6 pgs: 4 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:58:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 16:58:56 compute-0 ceph-mon[75249]: osdmap e25: 3 total, 3 up, 3 in
Jan 10 16:58:56 compute-0 podman[90690]: 2026-01-10 16:58:56.866059673 +0000 UTC m=+1.617926789 container died 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 16:58:56 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:56 compute-0 systemd[1]: libpod-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope: Deactivated successfully.
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.783279382 +0000 UTC m=+0.021717179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.881843239 +0000 UTC m=+0.120281066 container init 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 16:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd-merged.mount: Deactivated successfully.
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.889870971 +0000 UTC m=+0.128308748 container start 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:56 compute-0 epic_thompson[91004]: 167 167
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.893474705 +0000 UTC m=+0.131912482 container attach 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:58:56 compute-0 podman[90690]: 2026-01-10 16:58:56.918103086 +0000 UTC m=+1.669970192 container remove 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 16:58:56 compute-0 systemd[1]: libpod-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope: Deactivated successfully.
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.921537136 +0000 UTC m=+0.159974923 container died 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:58:56 compute-0 systemd[1]: libpod-conmon-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope: Deactivated successfully.
Jan 10 16:58:56 compute-0 sudo[90638]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a402c461c307c871f8ba25c5ebb63bb8ab5085d27beb329c4e12e39e56048cdc-merged.mount: Deactivated successfully.
Jan 10 16:58:56 compute-0 podman[90987]: 2026-01-10 16:58:56.960480571 +0000 UTC m=+0.198918348 container remove 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 16:58:56 compute-0 systemd[1]: libpod-conmon-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope: Deactivated successfully.
Jan 10 16:58:57 compute-0 sudo[91064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgjqogjfdfmmybwnsrtxjqnynakkwldz ; /usr/bin/python3'
Jan 10 16:58:57 compute-0 sudo[91064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.150803268 +0000 UTC m=+0.047442631 container create d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 16:58:57 compute-0 systemd[1]: Started libpod-conmon-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope.
Jan 10 16:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.131742058 +0000 UTC m=+0.028381441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.254200285 +0000 UTC m=+0.150839658 container init d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.262772813 +0000 UTC m=+0.159412186 container start d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:58:57 compute-0 python3[91073]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.266565603 +0000 UTC m=+0.163204976 container attach d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:57 compute-0 podman[91089]: 2026-01-10 16:58:57.350310322 +0000 UTC m=+0.067167502 container create b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 16:58:57 compute-0 systemd[1]: Started libpod-conmon-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope.
Jan 10 16:58:57 compute-0 podman[91089]: 2026-01-10 16:58:57.324270569 +0000 UTC m=+0.041127729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:57 compute-0 podman[91089]: 2026-01-10 16:58:57.467167997 +0000 UTC m=+0.184025147 container init b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:57 compute-0 podman[91089]: 2026-01-10 16:58:57.479084002 +0000 UTC m=+0.195941182 container start b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:57 compute-0 podman[91089]: 2026-01-10 16:58:57.483472318 +0000 UTC m=+0.200329468 container attach b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:58:57 compute-0 elated_kowalevski[91083]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:58:57 compute-0 elated_kowalevski[91083]: --> All data devices are unavailable
Jan 10 16:58:57 compute-0 systemd[1]: libpod-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope: Deactivated successfully.
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.795653136 +0000 UTC m=+0.692292489 container died d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 16:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321-merged.mount: Deactivated successfully.
Jan 10 16:58:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 10 16:58:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 10 16:58:57 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 10 16:58:57 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:58:57 compute-0 podman[91065]: 2026-01-10 16:58:57.872346861 +0000 UTC m=+0.768986214 container remove d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:58:57 compute-0 systemd[1]: libpod-conmon-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope: Deactivated successfully.
Jan 10 16:58:57 compute-0 sudo[90951]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 10 16:58:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 10 16:58:57 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:58 compute-0 sudo[91155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:58 compute-0 sudo[91155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:58 compute-0 sudo[91155]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:58 compute-0 sudo[91180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:58:58 compute-0 sudo[91180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.377552705 +0000 UTC m=+0.046462253 container create 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:58:58 compute-0 systemd[1]: Started libpod-conmon-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope.
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.356650961 +0000 UTC m=+0.025560599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.47360463 +0000 UTC m=+0.142514178 container init 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.480660294 +0000 UTC m=+0.149569842 container start 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.486000388 +0000 UTC m=+0.154909956 container attach 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:58 compute-0 vibrant_elgamal[91234]: 167 167
Jan 10 16:58:58 compute-0 systemd[1]: libpod-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope: Deactivated successfully.
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.490039064 +0000 UTC m=+0.158948612 container died 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 16:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3304e86ffa87b6ad6f9802d49d4acd04954fd9e27c74307caa839f1032a6919-merged.mount: Deactivated successfully.
Jan 10 16:58:58 compute-0 podman[91217]: 2026-01-10 16:58:58.547426832 +0000 UTC m=+0.216336390 container remove 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 16:58:58 compute-0 systemd[1]: libpod-conmon-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope: Deactivated successfully.
Jan 10 16:58:58 compute-0 podman[91257]: 2026-01-10 16:58:58.712056448 +0000 UTC m=+0.041431428 container create 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:58 compute-0 systemd[1]: Started libpod-conmon-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope.
Jan 10 16:58:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:58 compute-0 podman[91257]: 2026-01-10 16:58:58.696128788 +0000 UTC m=+0.025503818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:58 compute-0 podman[91257]: 2026-01-10 16:58:58.800477262 +0000 UTC m=+0.129852262 container init 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:58:58 compute-0 podman[91257]: 2026-01-10 16:58:58.808820043 +0000 UTC m=+0.138195023 container start 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:58:58 compute-0 podman[91257]: 2026-01-10 16:58:58.812725036 +0000 UTC m=+0.142100046 container attach 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 10 16:58:58 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 10 16:58:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 10 16:58:58 compute-0 heuristic_goodall[91104]: enabled application 'rbd' on pool 'vms'
Jan 10 16:58:58 compute-0 ceph-mon[75249]: osdmap e26: 3 total, 3 up, 3 in
Jan 10 16:58:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 10 16:58:58 compute-0 ceph-mon[75249]: pgmap v57: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:58:58 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 10 16:58:58 compute-0 systemd[1]: libpod-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope: Deactivated successfully.
Jan 10 16:58:58 compute-0 conmon[91104]: conmon b0e8e61c8920648fda3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope/container/memory.events
Jan 10 16:58:58 compute-0 podman[91089]: 2026-01-10 16:58:58.884092818 +0000 UTC m=+1.600949978 container died b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 16:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8-merged.mount: Deactivated successfully.
Jan 10 16:58:58 compute-0 podman[91089]: 2026-01-10 16:58:58.926861063 +0000 UTC m=+1.643718193 container remove b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 16:58:58 compute-0 systemd[1]: libpod-conmon-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope: Deactivated successfully.
Jan 10 16:58:58 compute-0 sudo[91064]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:59 compute-0 sudo[91317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgiauxpkesctwcrswfswclrbpyegwdjp ; /usr/bin/python3'
Jan 10 16:58:59 compute-0 exciting_babbage[91274]: {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     "0": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "devices": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "/dev/loop3"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             ],
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_name": "ceph_lv0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_size": "21470642176",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "name": "ceph_lv0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "tags": {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.crush_device_class": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.encrypted": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_id": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.vdo": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.with_tpm": "0"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             },
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "vg_name": "ceph_vg0"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         }
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     ],
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     "1": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "devices": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "/dev/loop4"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             ],
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_name": "ceph_lv1",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_size": "21470642176",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "name": "ceph_lv1",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "tags": {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:58:59 compute-0 sudo[91317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.crush_device_class": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.encrypted": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_id": "1",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.vdo": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.with_tpm": "0"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             },
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "vg_name": "ceph_vg1"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         }
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     ],
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     "2": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "devices": [
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "/dev/loop5"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             ],
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_name": "ceph_lv2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_size": "21470642176",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "name": "ceph_lv2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "tags": {
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.cluster_name": "ceph",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.crush_device_class": "",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.encrypted": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.objectstore": "bluestore",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osd_id": "2",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.vdo": "0",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:                 "ceph.with_tpm": "0"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             },
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "type": "block",
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:             "vg_name": "ceph_vg2"
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:         }
Jan 10 16:58:59 compute-0 exciting_babbage[91274]:     ]
Jan 10 16:58:59 compute-0 exciting_babbage[91274]: }
Jan 10 16:58:59 compute-0 systemd[1]: libpod-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope: Deactivated successfully.
Jan 10 16:58:59 compute-0 podman[91257]: 2026-01-10 16:58:59.139552647 +0000 UTC m=+0.468927637 container died 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f-merged.mount: Deactivated successfully.
Jan 10 16:58:59 compute-0 podman[91257]: 2026-01-10 16:58:59.187253045 +0000 UTC m=+0.516628035 container remove 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:59 compute-0 systemd[1]: libpod-conmon-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope: Deactivated successfully.
Jan 10 16:58:59 compute-0 sudo[91180]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:59 compute-0 python3[91319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:58:59 compute-0 sudo[91332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:58:59 compute-0 sudo[91332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:59 compute-0 sudo[91332]: pam_unix(sudo:session): session closed for user root
Jan 10 16:58:59 compute-0 podman[91345]: 2026-01-10 16:58:59.345804555 +0000 UTC m=+0.055942317 container create 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 16:58:59 compute-0 systemd[1]: Started libpod-conmon-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope.
Jan 10 16:58:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:59 compute-0 sudo[91370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:58:59 compute-0 sudo[91370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:58:59 compute-0 podman[91345]: 2026-01-10 16:58:59.322631646 +0000 UTC m=+0.032769448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:58:59 compute-0 podman[91345]: 2026-01-10 16:58:59.430865753 +0000 UTC m=+0.141003545 container init 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 16:58:59 compute-0 podman[91345]: 2026-01-10 16:58:59.440156171 +0000 UTC m=+0.150293963 container start 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:59 compute-0 podman[91345]: 2026-01-10 16:58:59.444741353 +0000 UTC m=+0.154879125 container attach 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.738615243 +0000 UTC m=+0.042167849 container create c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:58:59 compute-0 systemd[1]: Started libpod-conmon-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope.
Jan 10 16:58:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.817167682 +0000 UTC m=+0.120720308 container init c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.721648813 +0000 UTC m=+0.025201449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.825338528 +0000 UTC m=+0.128891134 container start c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.829127817 +0000 UTC m=+0.132680423 container attach c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 10 16:58:59 compute-0 adoring_cartwright[91448]: 167 167
Jan 10 16:58:59 compute-0 systemd[1]: libpod-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope: Deactivated successfully.
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.830487037 +0000 UTC m=+0.134039653 container died c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 16:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff821b29b5f1ffe573444988a281ab06577a709c28b016d9df45f22c9c1b0dd6-merged.mount: Deactivated successfully.
Jan 10 16:58:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 10 16:58:59 compute-0 ceph-mon[75249]: osdmap e27: 3 total, 3 up, 3 in
Jan 10 16:58:59 compute-0 podman[91432]: 2026-01-10 16:58:59.871149281 +0000 UTC m=+0.174701887 container remove c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:58:59 compute-0 systemd[1]: libpod-conmon-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope: Deactivated successfully.
Jan 10 16:58:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 10 16:58:59 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 10 16:58:59 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:00 compute-0 podman[91474]: 2026-01-10 16:59:00.03967358 +0000 UTC m=+0.045801295 container create b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:00 compute-0 systemd[1]: Started libpod-conmon-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope.
Jan 10 16:59:00 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:00 compute-0 podman[91474]: 2026-01-10 16:59:00.01858985 +0000 UTC m=+0.024717545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:00 compute-0 podman[91474]: 2026-01-10 16:59:00.124193751 +0000 UTC m=+0.130321446 container init b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:00 compute-0 podman[91474]: 2026-01-10 16:59:00.130456912 +0000 UTC m=+0.136584587 container start b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:00 compute-0 podman[91474]: 2026-01-10 16:59:00.134076327 +0000 UTC m=+0.140204002 container attach b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 10 16:59:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 10 16:59:00 compute-0 ceph-mon[75249]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 10 16:59:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 10 16:59:00 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 10 16:59:00 compute-0 nervous_sutherland[91395]: enabled application 'rbd' on pool 'volumes'
Jan 10 16:59:00 compute-0 systemd[1]: libpod-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope: Deactivated successfully.
Jan 10 16:59:00 compute-0 conmon[91395]: conmon 67bb2766109512e04b03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope/container/memory.events
Jan 10 16:59:00 compute-0 podman[91345]: 2026-01-10 16:59:00.909043673 +0000 UTC m=+1.619181455 container died 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:00 compute-0 lvm[91568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:59:00 compute-0 lvm[91571]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:59:00 compute-0 lvm[91568]: VG ceph_vg0 finished
Jan 10 16:59:00 compute-0 lvm[91571]: VG ceph_vg1 finished
Jan 10 16:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02-merged.mount: Deactivated successfully.
Jan 10 16:59:00 compute-0 lvm[91583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:59:00 compute-0 lvm[91583]: VG ceph_vg2 finished
Jan 10 16:59:00 compute-0 podman[91345]: 2026-01-10 16:59:00.966121062 +0000 UTC m=+1.676258834 container remove 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:59:00 compute-0 systemd[1]: libpod-conmon-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope: Deactivated successfully.
Jan 10 16:59:00 compute-0 sudo[91317]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:01 compute-0 great_antonelli[91490]: {}
Jan 10 16:59:01 compute-0 systemd[1]: libpod-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Deactivated successfully.
Jan 10 16:59:01 compute-0 podman[91474]: 2026-01-10 16:59:01.100821683 +0000 UTC m=+1.106949368 container died b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:59:01 compute-0 systemd[1]: libpod-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Consumed 1.642s CPU time.
Jan 10 16:59:01 compute-0 sudo[91609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdhojpqzbjmvotgbhvfxhxwfasgthqo ; /usr/bin/python3'
Jan 10 16:59:01 compute-0 sudo[91609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b-merged.mount: Deactivated successfully.
Jan 10 16:59:01 compute-0 podman[91474]: 2026-01-10 16:59:01.162067543 +0000 UTC m=+1.168195258 container remove b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:01 compute-0 systemd[1]: libpod-conmon-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Deactivated successfully.
Jan 10 16:59:01 compute-0 sudo[91370]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:01 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:01 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:01 compute-0 python3[91618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:01 compute-0 sudo[91625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:59:01 compute-0 sudo[91625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:01 compute-0 sudo[91625]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:01 compute-0 podman[91648]: 2026-01-10 16:59:01.340438154 +0000 UTC m=+0.044925407 container create 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 16:59:01 compute-0 systemd[1]: Started libpod-conmon-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope.
Jan 10 16:59:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:01 compute-0 podman[91648]: 2026-01-10 16:59:01.401163408 +0000 UTC m=+0.105650681 container init 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:01 compute-0 podman[91648]: 2026-01-10 16:59:01.407987256 +0000 UTC m=+0.112474509 container start 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:59:01 compute-0 podman[91648]: 2026-01-10 16:59:01.410648472 +0000 UTC m=+0.115135725 container attach 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:01 compute-0 podman[91648]: 2026-01-10 16:59:01.323319651 +0000 UTC m=+0.027806924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 10 16:59:01 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 10 16:59:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 10 16:59:01 compute-0 ceph-mon[75249]: osdmap e28: 3 total, 3 up, 3 in
Jan 10 16:59:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 10 16:59:01 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 10 16:59:02 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 10 16:59:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 10 16:59:02 compute-0 cranky_black[91665]: enabled application 'rbd' on pool 'backups'
Jan 10 16:59:02 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 10 16:59:02 compute-0 systemd[1]: libpod-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope: Deactivated successfully.
Jan 10 16:59:02 compute-0 podman[91648]: 2026-01-10 16:59:02.26005729 +0000 UTC m=+0.964544543 container died 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16-merged.mount: Deactivated successfully.
Jan 10 16:59:02 compute-0 podman[91648]: 2026-01-10 16:59:02.295324888 +0000 UTC m=+0.999812141 container remove 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:59:02 compute-0 systemd[1]: libpod-conmon-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope: Deactivated successfully.
Jan 10 16:59:02 compute-0 sudo[91609]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:02 compute-0 sudo[91724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clyjqupyrptsadqslyrcpytclrpflpok ; /usr/bin/python3'
Jan 10 16:59:02 compute-0 sudo[91724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:02 compute-0 python3[91726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:02 compute-0 podman[91727]: 2026-01-10 16:59:02.650671423 +0000 UTC m=+0.042713404 container create 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 16:59:02 compute-0 systemd[1]: Started libpod-conmon-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope.
Jan 10 16:59:02 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:02 compute-0 podman[91727]: 2026-01-10 16:59:02.631137279 +0000 UTC m=+0.023179060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:02 compute-0 podman[91727]: 2026-01-10 16:59:02.729160691 +0000 UTC m=+0.121202472 container init 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:02 compute-0 podman[91727]: 2026-01-10 16:59:02.734990969 +0000 UTC m=+0.127032730 container start 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:02 compute-0 podman[91727]: 2026-01-10 16:59:02.738719957 +0000 UTC m=+0.130761778 container attach 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:02 compute-0 ceph-mon[75249]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:02 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 10 16:59:02 compute-0 ceph-mon[75249]: osdmap e29: 3 total, 3 up, 3 in
Jan 10 16:59:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 10 16:59:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 10 16:59:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 10 16:59:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 10 16:59:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 10 16:59:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 10 16:59:03 compute-0 laughing_swartz[91743]: enabled application 'rbd' on pool 'images'
Jan 10 16:59:03 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 10 16:59:03 compute-0 systemd[1]: libpod-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope: Deactivated successfully.
Jan 10 16:59:03 compute-0 podman[91727]: 2026-01-10 16:59:03.951584813 +0000 UTC m=+1.343626574 container died 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:59:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47-merged.mount: Deactivated successfully.
Jan 10 16:59:03 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:03 compute-0 podman[91727]: 2026-01-10 16:59:03.994666128 +0000 UTC m=+1.386707879 container remove 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:04 compute-0 systemd[1]: libpod-conmon-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope: Deactivated successfully.
Jan 10 16:59:04 compute-0 sudo[91724]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:04 compute-0 sudo[91802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivddpiwbkqhofkpcevtdpynvgflptgr ; /usr/bin/python3'
Jan 10 16:59:04 compute-0 sudo[91802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:04 compute-0 python3[91804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.337040768 +0000 UTC m=+0.042811747 container create c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:04 compute-0 systemd[1]: Started libpod-conmon-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope.
Jan 10 16:59:04 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.411627743 +0000 UTC m=+0.117398752 container init c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.316862235 +0000 UTC m=+0.022633234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.417791091 +0000 UTC m=+0.123562070 container start c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.420934742 +0000 UTC m=+0.126705721 container attach c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 10 16:59:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 10 16:59:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 10 16:59:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 10 16:59:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 10 16:59:04 compute-0 ceph-mon[75249]: osdmap e30: 3 total, 3 up, 3 in
Jan 10 16:59:04 compute-0 ceph-mon[75249]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 10 16:59:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 10 16:59:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 10 16:59:04 compute-0 gifted_mahavira[91820]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 10 16:59:04 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 10 16:59:04 compute-0 systemd[1]: libpod-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope: Deactivated successfully.
Jan 10 16:59:04 compute-0 podman[91805]: 2026-01-10 16:59:04.949259713 +0000 UTC m=+0.655030692 container died c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee-merged.mount: Deactivated successfully.
Jan 10 16:59:05 compute-0 podman[91805]: 2026-01-10 16:59:05.001637146 +0000 UTC m=+0.707408125 container remove c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:05 compute-0 systemd[1]: libpod-conmon-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope: Deactivated successfully.
Jan 10 16:59:05 compute-0 sudo[91802]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:05 compute-0 sudo[91879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funbvwgopyslbemirikejjemyhhijatb ; /usr/bin/python3'
Jan 10 16:59:05 compute-0 sudo[91879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:05 compute-0 python3[91881]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:05 compute-0 podman[91882]: 2026-01-10 16:59:05.338979661 +0000 UTC m=+0.048740839 container create 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:05 compute-0 systemd[1]: Started libpod-conmon-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope.
Jan 10 16:59:05 compute-0 podman[91882]: 2026-01-10 16:59:05.318735236 +0000 UTC m=+0.028496414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:05 compute-0 podman[91882]: 2026-01-10 16:59:05.430596857 +0000 UTC m=+0.140358035 container init 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:05 compute-0 podman[91882]: 2026-01-10 16:59:05.438804114 +0000 UTC m=+0.148565272 container start 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:59:05 compute-0 podman[91882]: 2026-01-10 16:59:05.442835411 +0000 UTC m=+0.152596599 container attach 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 10 16:59:05 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 10 16:59:05 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 10 16:59:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 10 16:59:06 compute-0 ceph-mon[75249]: osdmap e31: 3 total, 3 up, 3 in
Jan 10 16:59:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 10 16:59:06 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 10 16:59:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 10 16:59:06 compute-0 confident_mcnulty[91897]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 10 16:59:06 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 10 16:59:06 compute-0 systemd[1]: libpod-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope: Deactivated successfully.
Jan 10 16:59:06 compute-0 podman[91882]: 2026-01-10 16:59:06.132333678 +0000 UTC m=+0.842094876 container died 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c-merged.mount: Deactivated successfully.
Jan 10 16:59:06 compute-0 podman[91882]: 2026-01-10 16:59:06.177811082 +0000 UTC m=+0.887572240 container remove 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:59:06 compute-0 systemd[1]: libpod-conmon-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope: Deactivated successfully.
Jan 10 16:59:06 compute-0 sudo[91879]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:07 compute-0 ceph-mon[75249]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:07 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 10 16:59:07 compute-0 ceph-mon[75249]: osdmap e32: 3 total, 3 up, 3 in
Jan 10 16:59:07 compute-0 python3[92010]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:59:07 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:08 compute-0 python3[92081]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064347.6181085-36623-166873741844071/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:59:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:08 compute-0 sudo[92129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zojwediqwqoentmtsklzlhwouoloxilo ; /usr/bin/python3'
Jan 10 16:59:08 compute-0 sudo[92129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:08 compute-0 python3[92131]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:08 compute-0 podman[92132]: 2026-01-10 16:59:08.848725527 +0000 UTC m=+0.048250824 container create 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:59:08 compute-0 systemd[1]: Started libpod-conmon-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope.
Jan 10 16:59:08 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:08 compute-0 podman[92132]: 2026-01-10 16:59:08.82665105 +0000 UTC m=+0.026176147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:08 compute-0 podman[92132]: 2026-01-10 16:59:08.945343968 +0000 UTC m=+0.144869095 container init 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:08 compute-0 podman[92132]: 2026-01-10 16:59:08.952730592 +0000 UTC m=+0.152255689 container start 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:08 compute-0 podman[92132]: 2026-01-10 16:59:08.957079837 +0000 UTC m=+0.156604924 container attach 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:09 compute-0 ceph-mon[75249]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 10 16:59:09 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[75245]: 2026-01-10T16:59:09.517+0000 7f8fff71e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e2 new map
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-10T16:59:09:517838+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-10T16:59:09.517425+0000
                                           modified        2026-01-10T16:59:09.517425+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 16:59:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 10 16:59:09 compute-0 systemd[1]: libpod-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope: Deactivated successfully.
Jan 10 16:59:09 compute-0 podman[92132]: 2026-01-10 16:59:09.580031053 +0000 UTC m=+0.779556160 container died 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef-merged.mount: Deactivated successfully.
Jan 10 16:59:09 compute-0 sudo[92172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:09 compute-0 podman[92132]: 2026-01-10 16:59:09.628534384 +0000 UTC m=+0.828059511 container remove 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 16:59:09 compute-0 sudo[92172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:09 compute-0 sudo[92172]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:09 compute-0 systemd[1]: libpod-conmon-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope: Deactivated successfully.
Jan 10 16:59:09 compute-0 sudo[92129]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:09 compute-0 sudo[92209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:59:09 compute-0 sudo[92209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:09 compute-0 sudo[92257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzizmpawfwfzntkcuwusvblnwpkodqn ; /usr/bin/python3'
Jan 10 16:59:09 compute-0 sudo[92257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:09 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:09 compute-0 python3[92259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.049661639 +0000 UTC m=+0.050209241 container create 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:59:10 compute-0 systemd[1]: Started libpod-conmon-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope.
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.029305991 +0000 UTC m=+0.029853613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 10 16:59:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 10 16:59:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 10 16:59:10 compute-0 ceph-mon[75249]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 16:59:10 compute-0 ceph-mon[75249]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 10 16:59:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 10 16:59:10 compute-0 ceph-mon[75249]: osdmap e33: 3 total, 3 up, 3 in
Jan 10 16:59:10 compute-0 ceph-mon[75249]: fsmap cephfs:0
Jan 10 16:59:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.152557542 +0000 UTC m=+0.153105184 container init 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.158065191 +0000 UTC m=+0.158612793 container start 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.169752328 +0000 UTC m=+0.170299980 container attach 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:10 compute-0 podman[92320]: 2026-01-10 16:59:10.181499958 +0000 UTC m=+0.062537188 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:10 compute-0 podman[92320]: 2026-01-10 16:59:10.307340453 +0000 UTC m=+0.188377673 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 16:59:10 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:59:10 compute-0 ceph-mgr[75538]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:10 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 16:59:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:10 compute-0 cranky_cohen[92316]: Scheduled mds.cephfs update...
Jan 10 16:59:10 compute-0 systemd[1]: libpod-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope: Deactivated successfully.
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.626291647 +0000 UTC m=+0.626839289 container died 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8-merged.mount: Deactivated successfully.
Jan 10 16:59:10 compute-0 podman[92277]: 2026-01-10 16:59:10.677874727 +0000 UTC m=+0.678422319 container remove 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 16:59:10 compute-0 systemd[1]: libpod-conmon-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope: Deactivated successfully.
Jan 10 16:59:10 compute-0 sudo[92257]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:10 compute-0 sudo[92209]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:11 compute-0 sudo[92505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:11 compute-0 sudo[92505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:11 compute-0 sudo[92505]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:11 compute-0 sudo[92530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 16:59:11 compute-0 sudo[92530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:59:11 compute-0 ceph-mon[75249]: Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:11 compute-0 ceph-mon[75249]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:11 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:11 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:11 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:11 compute-0 sudo[92642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sakjbeovfniwsfowvhajnffferxwwxwl ; /usr/bin/python3'
Jan 10 16:59:11 compute-0 sudo[92642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:11 compute-0 python3[92646]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 16:59:11 compute-0 sudo[92642]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:11 compute-0 sudo[92530]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:11 compute-0 sudo[92751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbhuutitlzrvxcmpvspvryohtqxwdwd ; /usr/bin/python3'
Jan 10 16:59:11 compute-0 sudo[92751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:11 compute-0 sudo[92718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:11 compute-0 sudo[92718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:11 compute-0 sudo[92718]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:11 compute-0 sudo[92762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:59:11 compute-0 sudo[92762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:11 compute-0 python3[92759]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064351.1436458-36655-105246331815915/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7cc641ddc3c198361b04b7e13e353930d285d63f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 16:59:11 compute-0 sudo[92751]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:11 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.03463918 +0000 UTC m=+0.057657376 container create 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 16:59:12 compute-0 systemd[1]: Started libpod-conmon-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope.
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.01040953 +0000 UTC m=+0.033427796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.120959933 +0000 UTC m=+0.143978139 container init 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.128190722 +0000 UTC m=+0.151208908 container start 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.132030193 +0000 UTC m=+0.155048379 container attach 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:12 compute-0 jolly_fermi[92839]: 167 167
Jan 10 16:59:12 compute-0 systemd[1]: libpod-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope: Deactivated successfully.
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.136357448 +0000 UTC m=+0.159375664 container died 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: Saving service mds.cephfs spec with placement compute-0
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:12 compute-0 sudo[92867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roepwpqpdvxrqlwqsjatnkawggfqcwfr ; /usr/bin/python3'
Jan 10 16:59:12 compute-0 sudo[92867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e879dee90e1bc60595af05e9a28d303f828e7cbe7e8cac825b3b39b3e591c31c-merged.mount: Deactivated successfully.
Jan 10 16:59:12 compute-0 podman[92822]: 2026-01-10 16:59:12.192063827 +0000 UTC m=+0.215082043 container remove 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:12 compute-0 systemd[1]: libpod-conmon-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope: Deactivated successfully.
Jan 10 16:59:12 compute-0 python3[92876]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:12 compute-0 podman[92889]: 2026-01-10 16:59:12.381082707 +0000 UTC m=+0.049698577 container create e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 16:59:12 compute-0 podman[92888]: 2026-01-10 16:59:12.387021419 +0000 UTC m=+0.058086429 container create eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 16:59:12 compute-0 systemd[1]: Started libpod-conmon-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope.
Jan 10 16:59:12 compute-0 systemd[1]: Started libpod-conmon-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope.
Jan 10 16:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:12 compute-0 podman[92888]: 2026-01-10 16:59:12.363434887 +0000 UTC m=+0.034499997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:12 compute-0 podman[92889]: 2026-01-10 16:59:12.361018427 +0000 UTC m=+0.029634327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:12 compute-0 podman[92888]: 2026-01-10 16:59:12.464735214 +0000 UTC m=+0.135800254 container init eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 16:59:12 compute-0 podman[92889]: 2026-01-10 16:59:12.475529985 +0000 UTC m=+0.144145925 container init e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:59:12 compute-0 podman[92888]: 2026-01-10 16:59:12.476295457 +0000 UTC m=+0.147360467 container start eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 16:59:12 compute-0 podman[92889]: 2026-01-10 16:59:12.480801928 +0000 UTC m=+0.149417818 container start e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 16:59:12 compute-0 podman[92888]: 2026-01-10 16:59:12.482144146 +0000 UTC m=+0.153209166 container attach eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:12 compute-0 podman[92889]: 2026-01-10 16:59:12.486597785 +0000 UTC m=+0.155213855 container attach e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:12 compute-0 infallible_tharp[92919]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:59:12 compute-0 infallible_tharp[92919]: --> All data devices are unavailable
Jan 10 16:59:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 10 16:59:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 10 16:59:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 10 16:59:13 compute-0 systemd[1]: libpod-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 podman[92889]: 2026-01-10 16:59:13.008271555 +0000 UTC m=+0.676887425 container died e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 16:59:13 compute-0 systemd[1]: libpod-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 podman[92888]: 2026-01-10 16:59:13.018984094 +0000 UTC m=+0.690049114 container died eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f-merged.mount: Deactivated successfully.
Jan 10 16:59:13 compute-0 podman[92889]: 2026-01-10 16:59:13.057041944 +0000 UTC m=+0.725657824 container remove e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:13 compute-0 systemd[1]: libpod-conmon-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6-merged.mount: Deactivated successfully.
Jan 10 16:59:13 compute-0 sudo[92867]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:13 compute-0 podman[92888]: 2026-01-10 16:59:13.109112858 +0000 UTC m=+0.780177868 container remove eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 16:59:13 compute-0 systemd[1]: libpod-conmon-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 ceph-mon[75249]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:13 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 10 16:59:13 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 10 16:59:13 compute-0 sudo[92762]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:13 compute-0 sudo[92989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:13 compute-0 sudo[92989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:13 compute-0 sudo[92989]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:13 compute-0 sudo[93014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:59:13 compute-0 sudo[93014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.623663972 +0000 UTC m=+0.060729635 container create 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 16:59:13 compute-0 systemd[1]: Started libpod-conmon-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope.
Jan 10 16:59:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.601822121 +0000 UTC m=+0.038887804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.703194449 +0000 UTC m=+0.140260132 container init 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 16:59:13 compute-0 sudo[93093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvjtkbhjuamyhtjmuvzbappviugectrz ; /usr/bin/python3'
Jan 10 16:59:13 compute-0 sudo[93093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.714315081 +0000 UTC m=+0.151380744 container start 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.7177665 +0000 UTC m=+0.154832163 container attach 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:13 compute-0 happy_almeida[93081]: 167 167
Jan 10 16:59:13 compute-0 systemd[1]: libpod-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.721185459 +0000 UTC m=+0.158251122 container died 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 16:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9033654c66f3db4adb68a8adb105f60425485e364f6d1e399a8b3ab4f2cdad7-merged.mount: Deactivated successfully.
Jan 10 16:59:13 compute-0 podman[93051]: 2026-01-10 16:59:13.770102552 +0000 UTC m=+0.207168225 container remove 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:13 compute-0 systemd[1]: libpod-conmon-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope: Deactivated successfully.
Jan 10 16:59:13 compute-0 python3[93097]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:13 compute-0 podman[93114]: 2026-01-10 16:59:13.966777094 +0000 UTC m=+0.059351586 container create 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:13 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:13 compute-0 podman[93120]: 2026-01-10 16:59:13.994112973 +0000 UTC m=+0.073011390 container create 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:59:14 compute-0 systemd[1]: Started libpod-conmon-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope.
Jan 10 16:59:14 compute-0 systemd[1]: Started libpod-conmon-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope.
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:13.941844723 +0000 UTC m=+0.034419295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:13.954967772 +0000 UTC m=+0.033866239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:14.078924493 +0000 UTC m=+0.157822930 container init 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:14.083785984 +0000 UTC m=+0.176360486 container init 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:14.093798843 +0000 UTC m=+0.172697280 container start 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:14.097944863 +0000 UTC m=+0.176843290 container attach 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:14.098344624 +0000 UTC m=+0.190919106 container start 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:14.10271214 +0000 UTC m=+0.195286652 container attach 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:14 compute-0 agitated_kilby[93153]: {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     "0": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "devices": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "/dev/loop3"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             ],
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_name": "ceph_lv0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_size": "21470642176",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "name": "ceph_lv0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "tags": {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.crush_device_class": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.encrypted": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_id": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.vdo": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.with_tpm": "0"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             },
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "vg_name": "ceph_vg0"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         }
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     ],
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     "1": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "devices": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "/dev/loop4"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             ],
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_name": "ceph_lv1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_size": "21470642176",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "name": "ceph_lv1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "tags": {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.crush_device_class": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.encrypted": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_id": "1",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.vdo": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.with_tpm": "0"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             },
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "vg_name": "ceph_vg1"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         }
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     ],
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     "2": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "devices": [
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "/dev/loop5"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             ],
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_name": "ceph_lv2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_size": "21470642176",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "name": "ceph_lv2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "tags": {
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.crush_device_class": "",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.encrypted": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osd_id": "2",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.vdo": "0",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:                 "ceph.with_tpm": "0"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             },
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "type": "block",
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:             "vg_name": "ceph_vg2"
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:         }
Jan 10 16:59:14 compute-0 agitated_kilby[93153]:     ]
Jan 10 16:59:14 compute-0 agitated_kilby[93153]: }
Jan 10 16:59:14 compute-0 systemd[1]: libpod-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope: Deactivated successfully.
Jan 10 16:59:14 compute-0 conmon[93153]: conmon 5811b3b9d3a41c829367 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope/container/memory.events
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:14.436166753 +0000 UTC m=+0.515065170 container died 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4-merged.mount: Deactivated successfully.
Jan 10 16:59:14 compute-0 podman[93120]: 2026-01-10 16:59:14.488848145 +0000 UTC m=+0.567746562 container remove 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 16:59:14 compute-0 systemd[1]: libpod-conmon-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope: Deactivated successfully.
Jan 10 16:59:14 compute-0 sudo[93014]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 16:59:14 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192702526' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:59:14 compute-0 sudo[93195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:14 compute-0 nifty_ellis[93151]: 
Jan 10 16:59:14 compute-0 nifty_ellis[93151]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":116,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1768064329,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83918848,"bytes_avail":64328007680,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-10T16:59:09:517838+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 16:59:14 compute-0 sudo[93195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:14 compute-0 sudo[93195]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:14 compute-0 systemd[1]: libpod-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope: Deactivated successfully.
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:14.656100786 +0000 UTC m=+0.748675268 container died 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 16:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1-merged.mount: Deactivated successfully.
Jan 10 16:59:14 compute-0 podman[93114]: 2026-01-10 16:59:14.707432219 +0000 UTC m=+0.800006691 container remove 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 16:59:14 compute-0 sudo[93222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:59:14 compute-0 sudo[93222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:14 compute-0 systemd[1]: libpod-conmon-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope: Deactivated successfully.
Jan 10 16:59:14 compute-0 sudo[93093]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:14 compute-0 sudo[93280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzdbbwdfycihryyaebydpuvcncnsafxs ; /usr/bin/python3'
Jan 10 16:59:14 compute-0 sudo[93280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.021990036 +0000 UTC m=+0.052598230 container create a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:15 compute-0 python3[93282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:15 compute-0 systemd[1]: Started libpod-conmon-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope.
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:14.998094686 +0000 UTC m=+0.028702930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:15 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.113219571 +0000 UTC m=+0.049701646 container create b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.122034726 +0000 UTC m=+0.152642970 container init a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.130552162 +0000 UTC m=+0.161160326 container start a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.134027123 +0000 UTC m=+0.164635327 container attach a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 16:59:15 compute-0 vigilant_kare[93319]: 167 167
Jan 10 16:59:15 compute-0 systemd[1]: libpod-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope: Deactivated successfully.
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.137535534 +0000 UTC m=+0.168143708 container died a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:15 compute-0 ceph-mon[75249]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:15 compute-0 systemd[1]: Started libpod-conmon-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope.
Jan 10 16:59:15 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/192702526' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb92743d761508165f012064c41babd56eb85ca0971c0799c18ebd49dd1ac05f-merged.mount: Deactivated successfully.
Jan 10 16:59:15 compute-0 podman[93296]: 2026-01-10 16:59:15.177977412 +0000 UTC m=+0.208585586 container remove a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 16:59:15 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.091879725 +0000 UTC m=+0.028361830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 systemd[1]: libpod-conmon-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope: Deactivated successfully.
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.208102062 +0000 UTC m=+0.144584187 container init b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.215321731 +0000 UTC m=+0.151803806 container start b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.2187603 +0000 UTC m=+0.155242385 container attach b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:15 compute-0 podman[93354]: 2026-01-10 16:59:15.34303856 +0000 UTC m=+0.046504194 container create 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 16:59:15 compute-0 systemd[1]: Started libpod-conmon-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope.
Jan 10 16:59:15 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:15 compute-0 podman[93354]: 2026-01-10 16:59:15.323547617 +0000 UTC m=+0.027013281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:15 compute-0 podman[93354]: 2026-01-10 16:59:15.434850863 +0000 UTC m=+0.138316527 container init 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:15 compute-0 podman[93354]: 2026-01-10 16:59:15.455014445 +0000 UTC m=+0.158480079 container start 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:15 compute-0 podman[93354]: 2026-01-10 16:59:15.45899754 +0000 UTC m=+0.162463214 container attach 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 16:59:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 16:59:15 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287799242' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 16:59:15 compute-0 sharp_hellman[93340]: 
Jan 10 16:59:15 compute-0 sharp_hellman[93340]: {"epoch":1,"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","modified":"2026-01-10T16:57:13.592121Z","created":"2026-01-10T16:57:13.592121Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 10 16:59:15 compute-0 sharp_hellman[93340]: dumped monmap epoch 1
Jan 10 16:59:15 compute-0 systemd[1]: libpod-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope: Deactivated successfully.
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.843900448 +0000 UTC m=+0.780382533 container died b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b-merged.mount: Deactivated successfully.
Jan 10 16:59:15 compute-0 podman[93313]: 2026-01-10 16:59:15.885365416 +0000 UTC m=+0.821847491 container remove b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 16:59:15 compute-0 systemd[1]: libpod-conmon-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope: Deactivated successfully.
Jan 10 16:59:15 compute-0 sudo[93280]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:15 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4287799242' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 16:59:16 compute-0 lvm[93485]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:59:16 compute-0 lvm[93485]: VG ceph_vg1 finished
Jan 10 16:59:16 compute-0 lvm[93484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:59:16 compute-0 lvm[93484]: VG ceph_vg0 finished
Jan 10 16:59:16 compute-0 lvm[93495]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:59:16 compute-0 lvm[93495]: VG ceph_vg2 finished
Jan 10 16:59:16 compute-0 sudo[93511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpwhvnbokfmmqnutfltizddjisfbnro ; /usr/bin/python3'
Jan 10 16:59:16 compute-0 sudo[93511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:16 compute-0 determined_wilson[93390]: {}
Jan 10 16:59:16 compute-0 systemd[1]: libpod-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Deactivated successfully.
Jan 10 16:59:16 compute-0 systemd[1]: libpod-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Consumed 1.475s CPU time.
Jan 10 16:59:16 compute-0 podman[93354]: 2026-01-10 16:59:16.395347878 +0000 UTC m=+1.098813542 container died 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 16:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c-merged.mount: Deactivated successfully.
Jan 10 16:59:16 compute-0 python3[93514]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:16 compute-0 podman[93354]: 2026-01-10 16:59:16.453839647 +0000 UTC m=+1.157305281 container remove 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 16:59:16 compute-0 systemd[1]: libpod-conmon-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Deactivated successfully.
Jan 10 16:59:16 compute-0 podman[93528]: 2026-01-10 16:59:16.500127335 +0000 UTC m=+0.045888267 container create 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:16 compute-0 sudo[93222]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:16 compute-0 systemd[1]: Started libpod-conmon-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope.
Jan 10 16:59:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:16 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1))
Jan 10 16:59:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 10 16:59:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 10 16:59:16 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 10 16:59:16 compute-0 podman[93528]: 2026-01-10 16:59:16.482354561 +0000 UTC m=+0.028115513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:16 compute-0 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 16:59:16 compute-0 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 16:59:16 compute-0 podman[93528]: 2026-01-10 16:59:16.597386784 +0000 UTC m=+0.143147766 container init 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 16:59:16 compute-0 podman[93528]: 2026-01-10 16:59:16.605870219 +0000 UTC m=+0.151631161 container start 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:59:16 compute-0 podman[93528]: 2026-01-10 16:59:16.609856504 +0000 UTC m=+0.155617436 container attach 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:16 compute-0 sudo[93546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:16 compute-0 sudo[93546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:16 compute-0 sudo[93546]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:16 compute-0 sudo[93572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 16:59:16 compute-0 sudo[93572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 10 16:59:17 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/284800316' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 10 16:59:17 compute-0 tender_roentgen[93543]: [client.openstack]
Jan 10 16:59:17 compute-0 tender_roentgen[93543]:         key = AQC7hGJpAAAAABAAX18vjtSqzsniwZc0Ni8AQg==
Jan 10 16:59:17 compute-0 tender_roentgen[93543]:         caps mgr = "allow *"
Jan 10 16:59:17 compute-0 tender_roentgen[93543]:         caps mon = "profile rbd"
Jan 10 16:59:17 compute-0 tender_roentgen[93543]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 10 16:59:17 compute-0 systemd[1]: libpod-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope: Deactivated successfully.
Jan 10 16:59:17 compute-0 podman[93528]: 2026-01-10 16:59:17.175187235 +0000 UTC m=+0.720948167 container died 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:59:17 compute-0 ceph-mon[75249]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:17 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/284800316' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 10 16:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba-merged.mount: Deactivated successfully.
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.212249626 +0000 UTC m=+0.052633821 container create f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:17 compute-0 podman[93528]: 2026-01-10 16:59:17.242241732 +0000 UTC m=+0.788002654 container remove 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:17 compute-0 systemd[1]: Started libpod-conmon-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope.
Jan 10 16:59:17 compute-0 systemd[1]: libpod-conmon-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope: Deactivated successfully.
Jan 10 16:59:17 compute-0 sudo[93511]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:17 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.191387703 +0000 UTC m=+0.031771928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.295310215 +0000 UTC m=+0.135694420 container init f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.303885533 +0000 UTC m=+0.144269728 container start f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:17 compute-0 flamboyant_nash[93687]: 167 167
Jan 10 16:59:17 compute-0 systemd[1]: libpod-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope: Deactivated successfully.
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.309152945 +0000 UTC m=+0.149537140 container attach f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.310314529 +0000 UTC m=+0.150698724 container died f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab251699d484795246e069e98296bd7e6aba6ecec43738b415006837a7b4b19-merged.mount: Deactivated successfully.
Jan 10 16:59:17 compute-0 podman[93658]: 2026-01-10 16:59:17.355384081 +0000 UTC m=+0.195768276 container remove f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 16:59:17 compute-0 systemd[1]: libpod-conmon-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope: Deactivated successfully.
Jan 10 16:59:17 compute-0 systemd[1]: Reloading.
Jan 10 16:59:17 compute-0 systemd-sysv-generator[93733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:59:17 compute-0 systemd-rc-local-generator[93728]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:59:17 compute-0 systemd[1]: Reloading.
Jan 10 16:59:17 compute-0 systemd-rc-local-generator[93770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 16:59:17 compute-0 systemd-sysv-generator[93774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 16:59:17 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:18 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.anmivh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 16:59:18 compute-0 ceph-mon[75249]: Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 16:59:18 compute-0 podman[93848]: 2026-01-10 16:59:18.266303865 +0000 UTC m=+0.040861951 container create 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.anmivh supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:18 compute-0 podman[93848]: 2026-01-10 16:59:18.323164537 +0000 UTC m=+0.097722643 container init 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:18 compute-0 podman[93848]: 2026-01-10 16:59:18.333258929 +0000 UTC m=+0.107817025 container start 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 16:59:18 compute-0 bash[93848]: 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5
Jan 10 16:59:18 compute-0 podman[93848]: 2026-01-10 16:59:18.249006745 +0000 UTC m=+0.023564851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:18 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.anmivh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 16:59:18 compute-0 ceph-mds[93917]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 16:59:18 compute-0 ceph-mds[93917]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 10 16:59:18 compute-0 ceph-mds[93917]: main not setting numa affinity
Jan 10 16:59:18 compute-0 ceph-mds[93917]: pidfile_write: ignore empty --pid-file
Jan 10 16:59:18 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh[93891]: starting mds.cephfs.compute-0.anmivh at 
Jan 10 16:59:18 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 2 from mon.0
Jan 10 16:59:18 compute-0 sudo[93572]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:18 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1))
Jan 10 16:59:18 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 16:59:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:18 compute-0 sudo[93964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:59:18 compute-0 sudo[93964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:18 compute-0 sudo[93964]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:18 compute-0 sudo[94051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drrczowlubnhwxiygsyvxkoncbggebst ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064358.2146611-36727-219939285804744/async_wrapper.py j957312711211 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064358.2146611-36727-219939285804744/AnsiballZ_command.py _'
Jan 10 16:59:18 compute-0 sudo[94051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:18 compute-0 sudo[94018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:18 compute-0 sudo[94018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:18 compute-0 sudo[94018]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:18 compute-0 sudo[94061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:59:18 compute-0 sudo[94061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:18 compute-0 ansible-async_wrapper.py[94058]: Invoked with j957312711211 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064358.2146611-36727-219939285804744/AnsiballZ_command.py _
Jan 10 16:59:18 compute-0 ansible-async_wrapper.py[94088]: Starting module and watcher
Jan 10 16:59:18 compute-0 ansible-async_wrapper.py[94088]: Start watching 94089 (30)
Jan 10 16:59:18 compute-0 ansible-async_wrapper.py[94089]: Start module (94089)
Jan 10 16:59:18 compute-0 ansible-async_wrapper.py[94058]: Return async_wrapper task started.
Jan 10 16:59:18 compute-0 sudo[94051]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:18 compute-0 python3[94090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:18 compute-0 podman[94093]: 2026-01-10 16:59:18.942573641 +0000 UTC m=+0.053327792 container create 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 16:59:18 compute-0 systemd[1]: Started libpod-conmon-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope.
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:18.916990712 +0000 UTC m=+0.027744873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:19 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:19.04328554 +0000 UTC m=+0.154039671 container init 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:19.052793565 +0000 UTC m=+0.163547706 container start 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:19.057223683 +0000 UTC m=+0.167977814 container attach 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:19 compute-0 podman[94154]: 2026-01-10 16:59:19.158189739 +0000 UTC m=+0.069264132 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 new map
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-10T16:59:19:194454+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-10T16:59:09.517425+0000
                                           modified        2026-01-10T16:59:09.517425+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.anmivh{-1:14242} state up:standby seq 1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 3 from mon.0
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Monitors have assigned me to become a standby
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:boot
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] as mds.0
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.anmivh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.anmivh"} v 0)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.anmivh"} : dispatch
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 all = 0
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e4 new map
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-10T16:59:19:214126+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-10T16:59:09.517425+0000
                                           modified        2026-01-10T16:59:19.214117+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14242}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.anmivh{0:14242} state up:creating seq 1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:creating}
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 4 from mon.0
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x1
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x100
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x600
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x601
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x602
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x603
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x604
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x605
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x606
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x607
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x608
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x609
Jan 10 16:59:19 compute-0 ceph-mgr[75538]: [progress INFO root] Writing back 4 completed events
Jan 10 16:59:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:19 compute-0 ceph-mds[93917]: mds.0.4 creating_done
Jan 10 16:59:19 compute-0 ceph-mon[75249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.anmivh is now active in filesystem cephfs as rank 0
Jan 10 16:59:19 compute-0 podman[94154]: 2026-01-10 16:59:19.328183379 +0000 UTC m=+0.239257782 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:19 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:19 compute-0 thirsty_ellis[94131]: 
Jan 10 16:59:19 compute-0 thirsty_ellis[94131]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 16:59:19 compute-0 systemd[1]: libpod-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope: Deactivated successfully.
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:19.515097348 +0000 UTC m=+0.625851459 container died 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e-merged.mount: Deactivated successfully.
Jan 10 16:59:19 compute-0 podman[94093]: 2026-01-10 16:59:19.556461453 +0000 UTC m=+0.667215564 container remove 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:59:19 compute-0 systemd[1]: libpod-conmon-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope: Deactivated successfully.
Jan 10 16:59:19 compute-0 ansible-async_wrapper.py[94089]: Module complete (94089)
Jan 10 16:59:19 compute-0 sudo[94376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzeriacyxoflhnuudvgeglszgyypdsng ; /usr/bin/python3'
Jan 10 16:59:19 compute-0 sudo[94376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:19 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:20 compute-0 python3[94385]: ansible-ansible.legacy.async_status Invoked with jid=j957312711211.94058 mode=status _async_dir=/root/.ansible_async
Jan 10 16:59:20 compute-0 sudo[94376]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:20 compute-0 sudo[94061]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:boot
Jan 10 16:59:20 compute-0 ceph-mon[75249]: daemon mds.cephfs.compute-0.anmivh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 10 16:59:20 compute-0 ceph-mon[75249]: Cluster is now healthy
Jan 10 16:59:20 compute-0 ceph-mon[75249]: fsmap cephfs:0 1 up:standby
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.anmivh"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:creating}
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: daemon mds.cephfs.compute-0.anmivh is now active in filesystem cephfs as rank 0
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e5 new map
Jan 10 16:59:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-10T16:59:20:234282+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-10T16:59:09.517425+0000
                                           modified        2026-01-10T16:59:20.234278+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14242}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14242 members: 14242
                                           [mds.cephfs.compute-0.anmivh{0:14242} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 10 16:59:20 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 5 from mon.0
Jan 10 16:59:20 compute-0 ceph-mds[93917]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 10 16:59:20 compute-0 ceph-mds[93917]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 10 16:59:20 compute-0 ceph-mds[93917]: mds.0.4 recovery_done -- successful recovery!
Jan 10 16:59:20 compute-0 ceph-mds[93917]: mds.0.4 active_start
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:active
Jan 10 16:59:20 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:active}
Jan 10 16:59:20 compute-0 sudo[94434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:20 compute-0 sudo[94434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:20 compute-0 sudo[94434]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:20 compute-0 sudo[94484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eansvctrtaxtgtdvzudojacflyzmznpv ; /usr/bin/python3'
Jan 10 16:59:20 compute-0 sudo[94484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:20 compute-0 sudo[94488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:59:20 compute-0 sudo[94488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:20 compute-0 python3[94489]: ansible-ansible.legacy.async_status Invoked with jid=j957312711211.94058 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 16:59:20 compute-0 sudo[94484]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.646968995 +0000 UTC m=+0.062403324 container create 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:20 compute-0 systemd[1]: Started libpod-conmon-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope.
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.615255049 +0000 UTC m=+0.030689448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:20 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.753475672 +0000 UTC m=+0.168909981 container init 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.761549995 +0000 UTC m=+0.176984294 container start 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.766299932 +0000 UTC m=+0.181734241 container attach 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:20 compute-0 brave_hawking[94543]: 167 167
Jan 10 16:59:20 compute-0 systemd[1]: libpod-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope: Deactivated successfully.
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.768862596 +0000 UTC m=+0.184296915 container died 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b52c936a24d24a24ee7f37b92d605a12f143502a7889767db9151ee09343c342-merged.mount: Deactivated successfully.
Jan 10 16:59:20 compute-0 podman[94527]: 2026-01-10 16:59:20.810953542 +0000 UTC m=+0.226387841 container remove 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 16:59:20 compute-0 systemd[1]: libpod-conmon-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope: Deactivated successfully.
Jan 10 16:59:20 compute-0 sudo[94584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whnveyswvpwsygsykbwmjfejqfkmaupo ; /usr/bin/python3'
Jan 10 16:59:20 compute-0 sudo[94584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.023460951 +0000 UTC m=+0.058682686 container create e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 16:59:21 compute-0 python3[94586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:21 compute-0 systemd[1]: Started libpod-conmon-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope.
Jan 10 16:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:20.998263543 +0000 UTC m=+0.033485308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 podman[94607]: 2026-01-10 16:59:21.099459516 +0000 UTC m=+0.047951076 container create 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.104932675 +0000 UTC m=+0.140154450 container init e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.115660514 +0000 UTC m=+0.150882239 container start e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.119430583 +0000 UTC m=+0.154652318 container attach e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 16:59:21 compute-0 systemd[1]: Started libpod-conmon-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope.
Jan 10 16:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:21 compute-0 podman[94607]: 2026-01-10 16:59:21.075664649 +0000 UTC m=+0.024156219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:21 compute-0 podman[94607]: 2026-01-10 16:59:21.192795123 +0000 UTC m=+0.141286723 container init 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:21 compute-0 podman[94607]: 2026-01-10 16:59:21.200105424 +0000 UTC m=+0.148596974 container start 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:21 compute-0 podman[94607]: 2026-01-10 16:59:21.205241122 +0000 UTC m=+0.153732722 container attach 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:21 compute-0 ceph-mon[75249]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:21 compute-0 ceph-mon[75249]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:21 compute-0 ceph-mon[75249]: mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:active
Jan 10 16:59:21 compute-0 ceph-mon[75249]: fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:active}
Jan 10 16:59:21 compute-0 epic_bhaskara[94620]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:59:21 compute-0 epic_bhaskara[94620]: --> All data devices are unavailable
Jan 10 16:59:21 compute-0 systemd[1]: libpod-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope: Deactivated successfully.
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.690663455 +0000 UTC m=+0.725885180 container died e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:59:21 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:21 compute-0 frosty_heisenberg[94630]: 
Jan 10 16:59:21 compute-0 frosty_heisenberg[94630]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 16:59:21 compute-0 systemd[1]: libpod-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope: Deactivated successfully.
Jan 10 16:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd-merged.mount: Deactivated successfully.
Jan 10 16:59:21 compute-0 podman[94592]: 2026-01-10 16:59:21.746956221 +0000 UTC m=+0.782177956 container remove e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:21 compute-0 podman[94677]: 2026-01-10 16:59:21.758157104 +0000 UTC m=+0.029033719 container died 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:59:21 compute-0 systemd[1]: libpod-conmon-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope: Deactivated successfully.
Jan 10 16:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89-merged.mount: Deactivated successfully.
Jan 10 16:59:21 compute-0 sudo[94488]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:21 compute-0 podman[94677]: 2026-01-10 16:59:21.800958121 +0000 UTC m=+0.071834716 container remove 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:21 compute-0 systemd[1]: libpod-conmon-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope: Deactivated successfully.
Jan 10 16:59:21 compute-0 sudo[94584]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:21 compute-0 sudo[94696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:21 compute-0 sudo[94696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:21 compute-0 sudo[94696]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:21 compute-0 sudo[94721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:59:21 compute-0 sudo[94721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:21 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:22 compute-0 ceph-mon[75249]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:22 compute-0 ceph-mon[75249]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:22 compute-0 podman[94758]: 2026-01-10 16:59:22.277120146 +0000 UTC m=+0.063081733 container create 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:22 compute-0 systemd[1]: Started libpod-conmon-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope.
Jan 10 16:59:22 compute-0 podman[94758]: 2026-01-10 16:59:22.248789038 +0000 UTC m=+0.034750725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:22 compute-0 podman[94758]: 2026-01-10 16:59:22.371200544 +0000 UTC m=+0.157162221 container init 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 16:59:22 compute-0 podman[94758]: 2026-01-10 16:59:22.381867582 +0000 UTC m=+0.167829169 container start 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 16:59:22 compute-0 podman[94758]: 2026-01-10 16:59:22.386010201 +0000 UTC m=+0.171971878 container attach 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:22 compute-0 inspiring_neumann[94775]: 167 167
Jan 10 16:59:22 compute-0 systemd[1]: libpod-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope: Deactivated successfully.
Jan 10 16:59:22 compute-0 podman[94780]: 2026-01-10 16:59:22.438462167 +0000 UTC m=+0.032501360 container died 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 16:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-018334cfd5195fb2fd77e890c5b605c6b4b4c3a8137aeaf85550501667f144e2-merged.mount: Deactivated successfully.
Jan 10 16:59:22 compute-0 podman[94780]: 2026-01-10 16:59:22.480417419 +0000 UTC m=+0.074456572 container remove 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:22 compute-0 systemd[1]: libpod-conmon-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope: Deactivated successfully.
Jan 10 16:59:22 compute-0 sudo[94817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgpnmtkzsfpvujbemkitmtqdfyvhnrcq ; /usr/bin/python3'
Jan 10 16:59:22 compute-0 sudo[94817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:22 compute-0 python3[94821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:22 compute-0 podman[94827]: 2026-01-10 16:59:22.804994268 +0000 UTC m=+0.186874542 container create 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 16:59:22 compute-0 podman[94835]: 2026-01-10 16:59:22.826505678 +0000 UTC m=+0.177626774 container create 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:22 compute-0 systemd[1]: Started libpod-conmon-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope.
Jan 10 16:59:22 compute-0 systemd[1]: Started libpod-conmon-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope.
Jan 10 16:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:22 compute-0 podman[94827]: 2026-01-10 16:59:22.786607758 +0000 UTC m=+0.168488062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:22 compute-0 podman[94827]: 2026-01-10 16:59:22.886443787 +0000 UTC m=+0.268324061 container init 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 16:59:22 compute-0 podman[94835]: 2026-01-10 16:59:22.891156133 +0000 UTC m=+0.242277259 container init 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 16:59:22 compute-0 podman[94827]: 2026-01-10 16:59:22.896040414 +0000 UTC m=+0.277920698 container start 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:22 compute-0 podman[94835]: 2026-01-10 16:59:22.800969972 +0000 UTC m=+0.152091088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:22 compute-0 podman[94835]: 2026-01-10 16:59:22.901944664 +0000 UTC m=+0.253065760 container start 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:22 compute-0 podman[94827]: 2026-01-10 16:59:22.906026992 +0000 UTC m=+0.287907296 container attach 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:22 compute-0 podman[94835]: 2026-01-10 16:59:22.915095494 +0000 UTC m=+0.266216600 container attach 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]: {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     "0": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "devices": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "/dev/loop3"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             ],
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_name": "ceph_lv0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_size": "21470642176",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "name": "ceph_lv0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "tags": {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.crush_device_class": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.encrypted": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_id": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.vdo": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.with_tpm": "0"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             },
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "vg_name": "ceph_vg0"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         }
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     ],
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     "1": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "devices": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "/dev/loop4"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             ],
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_name": "ceph_lv1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_size": "21470642176",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "name": "ceph_lv1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "tags": {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.crush_device_class": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.encrypted": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_id": "1",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.vdo": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.with_tpm": "0"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             },
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "vg_name": "ceph_vg1"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         }
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     ],
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     "2": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "devices": [
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "/dev/loop5"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             ],
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_name": "ceph_lv2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_size": "21470642176",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "name": "ceph_lv2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "tags": {
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.crush_device_class": "",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.encrypted": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osd_id": "2",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.vdo": "0",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:                 "ceph.with_tpm": "0"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             },
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "type": "block",
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:             "vg_name": "ceph_vg2"
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:         }
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]:     ]
Jan 10 16:59:23 compute-0 frosty_varahamihira[94861]: }
Jan 10 16:59:23 compute-0 systemd[1]: libpod-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope: Deactivated successfully.
Jan 10 16:59:23 compute-0 podman[94827]: 2026-01-10 16:59:23.181817459 +0000 UTC m=+0.563697763 container died 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 10 16:59:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66-merged.mount: Deactivated successfully.
Jan 10 16:59:23 compute-0 podman[94827]: 2026-01-10 16:59:23.363394524 +0000 UTC m=+0.745274798 container remove 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:23 compute-0 systemd[1]: libpod-conmon-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope: Deactivated successfully.
Jan 10 16:59:23 compute-0 sudo[94721]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:23 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:23 compute-0 admiring_elion[94863]: 
Jan 10 16:59:23 compute-0 admiring_elion[94863]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Jan 10 16:59:23 compute-0 sudo[94905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:23 compute-0 sudo[94905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:23 compute-0 sudo[94905]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:23 compute-0 systemd[1]: libpod-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope: Deactivated successfully.
Jan 10 16:59:23 compute-0 podman[94835]: 2026-01-10 16:59:23.502140172 +0000 UTC m=+0.853261278 container died 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f-merged.mount: Deactivated successfully.
Jan 10 16:59:23 compute-0 podman[94835]: 2026-01-10 16:59:23.544819856 +0000 UTC m=+0.895940952 container remove 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:23 compute-0 sudo[94932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:59:23 compute-0 sudo[94932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:23 compute-0 systemd[1]: libpod-conmon-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope: Deactivated successfully.
Jan 10 16:59:23 compute-0 sudo[94817]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:23 compute-0 ansible-async_wrapper.py[94088]: Done in kid B.
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.832099795 +0000 UTC m=+0.051285743 container create 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:23 compute-0 systemd[1]: Started libpod-conmon-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope.
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.80458822 +0000 UTC m=+0.023774158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:23 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.937091307 +0000 UTC m=+0.156277295 container init 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.946500709 +0000 UTC m=+0.165686657 container start 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.951331328 +0000 UTC m=+0.170517246 container attach 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:23 compute-0 pedantic_dijkstra[94997]: 167 167
Jan 10 16:59:23 compute-0 systemd[1]: libpod-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope: Deactivated successfully.
Jan 10 16:59:23 compute-0 podman[94981]: 2026-01-10 16:59:23.95691482 +0000 UTC m=+0.176100728 container died 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:59:23 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bed1db7b85958fd102ac1e2e6089eecf49a4985f0b153bac1ffbf59c499b8f8-merged.mount: Deactivated successfully.
Jan 10 16:59:24 compute-0 podman[94981]: 2026-01-10 16:59:24.012490655 +0000 UTC m=+0.231676573 container remove 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 16:59:24 compute-0 systemd[1]: libpod-conmon-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope: Deactivated successfully.
Jan 10 16:59:24 compute-0 podman[95021]: 2026-01-10 16:59:24.224279473 +0000 UTC m=+0.100030751 container create ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:24 compute-0 ceph-mds[93917]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 10 16:59:24 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh[93891]: 2026-01-10T16:59:24.228+0000 7f65855d3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 10 16:59:24 compute-0 podman[95021]: 2026-01-10 16:59:24.147104934 +0000 UTC m=+0.022856242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:24 compute-0 systemd[1]: Started libpod-conmon-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope.
Jan 10 16:59:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 podman[95021]: 2026-01-10 16:59:24.315328333 +0000 UTC m=+0.191079731 container init ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:24 compute-0 podman[95021]: 2026-01-10 16:59:24.324947481 +0000 UTC m=+0.200698819 container start ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:59:24 compute-0 podman[95021]: 2026-01-10 16:59:24.329142532 +0000 UTC m=+0.204893860 container attach ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:24 compute-0 sudo[95065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjyfydxgkszmdxdhxudeaqccyzheics ; /usr/bin/python3'
Jan 10 16:59:24 compute-0 sudo[95065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:24 compute-0 python3[95067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:24 compute-0 podman[95068]: 2026-01-10 16:59:24.561269648 +0000 UTC m=+0.050325745 container create 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:24 compute-0 systemd[1]: Started libpod-conmon-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope.
Jan 10 16:59:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:24 compute-0 podman[95068]: 2026-01-10 16:59:24.541013763 +0000 UTC m=+0.030069890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:24 compute-0 podman[95068]: 2026-01-10 16:59:24.650628999 +0000 UTC m=+0.139685116 container init 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:24 compute-0 podman[95068]: 2026-01-10 16:59:24.657276972 +0000 UTC m=+0.146333069 container start 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 16:59:24 compute-0 podman[95068]: 2026-01-10 16:59:24.684154708 +0000 UTC m=+0.173211065 container attach 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 16:59:25 compute-0 ceph-mon[75249]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:25 compute-0 ceph-mon[75249]: pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:25 compute-0 lvm[95177]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:59:25 compute-0 lvm[95177]: VG ceph_vg0 finished
Jan 10 16:59:25 compute-0 lvm[95179]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:59:25 compute-0 lvm[95179]: VG ceph_vg1 finished
Jan 10 16:59:25 compute-0 lvm[95181]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:59:25 compute-0 lvm[95181]: VG ceph_vg2 finished
Jan 10 16:59:25 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:25 compute-0 vigorous_dijkstra[95092]: 
Jan 10 16:59:25 compute-0 vigorous_dijkstra[95092]: [{"container_id": "2d8e6ffc82d6", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.23%", "created": "2026-01-10T16:58:04.726457Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-10T16:58:04.801858Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139779Z", "memory_usage": 7790919, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-10T16:58:04.605554Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@crash.compute-0", "version": "20.2.0"}, {"container_id": "9a7a6ac38874", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.25%", "created": "2026-01-10T16:59:18.346552Z", "daemon_id": "cephfs.compute-0.anmivh", "daemon_name": "mds.cephfs.compute-0.anmivh", "daemon_type": "mds", "events": ["2026-01-10T16:59:18.421424Z daemon:mds.cephfs.compute-0.anmivh [INFO] \"Deployed mds.cephfs.compute-0.anmivh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.140198Z", "memory_usage": 15330181, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-10T16:59:18.254836Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mds.cephfs.compute-0.anmivh", "version": "20.2.0"}, {"container_id": "1966a4894cf3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.61%", "created": "2026-01-10T16:57:20.132238Z", "daemon_id": "compute-0.mkxlpr", "daemon_name": "mgr.compute-0.mkxlpr", "daemon_type": "mgr", "events": ["2026-01-10T16:58:10.864841Z daemon:mgr.compute-0.mkxlpr [INFO] \"Reconfigured mgr.compute-0.mkxlpr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139647Z", "memory_usage": 547042099, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-10T16:57:20.018328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.mkxlpr", "version": "20.2.0"}, {"container_id": "69622407e4b3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "3.11%", "created": "2026-01-10T16:57:15.698113Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-10T16:58:10.158449Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139390Z", "memory_request": 2147483648, "memory_usage": 42498785, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-10T16:57:18.097137Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mon.compute-0", "version": "20.2.0"}, {"container_id": "8bba0bcac67d", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.73%", "created": "2026-01-10T16:58:29.175131Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-10T16:58:29.274760Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139886Z", "memory_request": 4294967296, "memory_usage": 58174996, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:29.056076Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.0", "version": "20.2.0"}, {"container_id": "2086bc4111bf", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.82%", "created": "2026-01-10T16:58:34.721291Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-10T16:58:34.848772Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139989Z", "memory_request": 4294967296, "memory_usage": 58678312, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:34.481064Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.1", "version": "20.2.0"}, {"container_id": "d71926618b51", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "4.06%", "created": "2026-01-10T16:58:42.340512Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-10T16:58:42.482816Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.140092Z", "memory_request": 4294967296, "memory_usage": 55710842, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:42.048876Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.2", "version": "20.2.0"}]
Jan 10 16:59:25 compute-0 lvm[95183]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:59:25 compute-0 lvm[95183]: VG ceph_vg1 finished
Jan 10 16:59:25 compute-0 systemd[1]: libpod-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope: Deactivated successfully.
Jan 10 16:59:25 compute-0 podman[95068]: 2026-01-10 16:59:25.155236047 +0000 UTC m=+0.644292144 container died 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 16:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf-merged.mount: Deactivated successfully.
Jan 10 16:59:25 compute-0 podman[95068]: 2026-01-10 16:59:25.19933209 +0000 UTC m=+0.688388187 container remove 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 16:59:25 compute-0 hopeful_wescoff[95037]: {}
Jan 10 16:59:25 compute-0 systemd[1]: libpod-conmon-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope: Deactivated successfully.
Jan 10 16:59:25 compute-0 sudo[95065]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:25 compute-0 systemd[1]: libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Deactivated successfully.
Jan 10 16:59:25 compute-0 systemd[1]: libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Consumed 1.430s CPU time.
Jan 10 16:59:25 compute-0 conmon[95037]: conmon ae2d4cb5369f91fb24cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope/container/memory.events
Jan 10 16:59:25 compute-0 podman[95021]: 2026-01-10 16:59:25.252179497 +0000 UTC m=+1.127930825 container died ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 16:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe-merged.mount: Deactivated successfully.
Jan 10 16:59:25 compute-0 podman[95021]: 2026-01-10 16:59:25.2986675 +0000 UTC m=+1.174418788 container remove ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 16:59:25 compute-0 systemd[1]: libpod-conmon-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Deactivated successfully.
Jan 10 16:59:25 compute-0 sudo[94932]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:25 compute-0 sudo[95211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:59:25 compute-0 sudo[95211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:25 compute-0 sudo[95211]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:25 compute-0 sudo[95236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:25 compute-0 sudo[95236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:25 compute-0 sudo[95236]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:25 compute-0 sudo[95261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 16:59:25 compute-0 sudo[95261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:25 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:26 compute-0 ceph-mon[75249]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 16:59:26 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:26 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:26 compute-0 sudo[95365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vglnzdnzjodmwwvyepwymesrsrkzzvcu ; /usr/bin/python3'
Jan 10 16:59:26 compute-0 podman[95328]: 2026-01-10 16:59:26.13395929 +0000 UTC m=+0.087005715 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 16:59:26 compute-0 sudo[95365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:26 compute-0 podman[95328]: 2026-01-10 16:59:26.259136666 +0000 UTC m=+0.212183021 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 16:59:26 compute-0 python3[95373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:26 compute-0 podman[95393]: 2026-01-10 16:59:26.372626584 +0000 UTC m=+0.050958163 container create f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:26 compute-0 systemd[1]: Started libpod-conmon-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope.
Jan 10 16:59:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:26 compute-0 podman[95393]: 2026-01-10 16:59:26.34339956 +0000 UTC m=+0.021731129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:26 compute-0 podman[95393]: 2026-01-10 16:59:26.463680963 +0000 UTC m=+0.142012592 container init f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:26 compute-0 podman[95393]: 2026-01-10 16:59:26.476214345 +0000 UTC m=+0.154545934 container start f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 16:59:26 compute-0 podman[95393]: 2026-01-10 16:59:26.480823928 +0000 UTC m=+0.159155517 container attach f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820989455' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:59:27 compute-0 naughty_goldstine[95428]: 
Jan 10 16:59:27 compute-0 naughty_goldstine[95428]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":128,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1768064329,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":24,"data_bytes":461710,"bytes_used":83939328,"bytes_avail":64327987200,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"btime":"2026-01-10T16:59:20:234282+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.anmivh","status":"up:active","gid":14242}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 16:59:27 compute-0 systemd[1]: libpod-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope: Deactivated successfully.
Jan 10 16:59:27 compute-0 podman[95393]: 2026-01-10 16:59:27.020094207 +0000 UTC m=+0.698425756 container died f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:27 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/820989455' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 16:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39-merged.mount: Deactivated successfully.
Jan 10 16:59:27 compute-0 podman[95393]: 2026-01-10 16:59:27.077456664 +0000 UTC m=+0.755788223 container remove f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:27 compute-0 systemd[1]: libpod-conmon-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope: Deactivated successfully.
Jan 10 16:59:27 compute-0 sudo[95365]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:27 compute-0 sudo[95261]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 16:59:27 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:27 compute-0 sudo[95572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:27 compute-0 sudo[95572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:27 compute-0 sudo[95572]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:27 compute-0 sudo[95597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 16:59:27 compute-0 sudo[95597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.648077167 +0000 UTC m=+0.053125445 container create d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 16:59:27 compute-0 systemd[1]: Started libpod-conmon-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope.
Jan 10 16:59:27 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.624848106 +0000 UTC m=+0.029896464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.719068688 +0000 UTC m=+0.124117026 container init d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.730055585 +0000 UTC m=+0.135103874 container start d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 16:59:27 compute-0 agitated_kowalevski[95650]: 167 167
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.735274226 +0000 UTC m=+0.140322514 container attach d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 16:59:27 compute-0 systemd[1]: libpod-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope: Deactivated successfully.
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.736121341 +0000 UTC m=+0.141169639 container died d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 16:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-11fd53482487e0862e6f2c94bb4b102b7c6a82a49faffde8fa3ea21212fd395a-merged.mount: Deactivated successfully.
Jan 10 16:59:27 compute-0 podman[95634]: 2026-01-10 16:59:27.790867962 +0000 UTC m=+0.195916220 container remove d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 16:59:27 compute-0 systemd[1]: libpod-conmon-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope: Deactivated successfully.
Jan 10 16:59:27 compute-0 sudo[95691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqpikpuxbpdsyhnhhsvchmfgejuqfdh ; /usr/bin/python3'
Jan 10 16:59:27 compute-0 sudo[95691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:27 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.012777523 +0000 UTC m=+0.067265434 container create 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 16:59:28 compute-0 python3[95693]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:28 compute-0 systemd[1]: Started libpod-conmon-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope.
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:27.980171931 +0000 UTC m=+0.034659912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.114947764 +0000 UTC m=+0.055223256 container create 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.1210411 +0000 UTC m=+0.175529011 container init 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.133418298 +0000 UTC m=+0.187906199 container start 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.137937958 +0000 UTC m=+0.192425839 container attach 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 16:59:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 16:59:28 compute-0 systemd[1]: Started libpod-conmon-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope.
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.089830809 +0000 UTC m=+0.030106381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.205801869 +0000 UTC m=+0.146077381 container init 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.212315807 +0000 UTC m=+0.152591339 container start 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.216869408 +0000 UTC m=+0.157144920 container attach 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 16:59:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:28 compute-0 epic_bassi[95721]: --> passed data devices: 0 physical, 3 LVM
Jan 10 16:59:28 compute-0 epic_bassi[95721]: --> All data devices are unavailable
Jan 10 16:59:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 16:59:28 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80309119' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:59:28 compute-0 festive_brahmagupta[95735]: 
Jan 10 16:59:28 compute-0 systemd[1]: libpod-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope: Deactivated successfully.
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.690423308 +0000 UTC m=+0.744911199 container died 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:28 compute-0 systemd[1]: libpod-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope: Deactivated successfully.
Jan 10 16:59:28 compute-0 festive_brahmagupta[95735]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.700834889 +0000 UTC m=+0.641110421 container died 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b-merged.mount: Deactivated successfully.
Jan 10 16:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d-merged.mount: Deactivated successfully.
Jan 10 16:59:28 compute-0 podman[95699]: 2026-01-10 16:59:28.761328926 +0000 UTC m=+0.815816807 container remove 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:28 compute-0 podman[95713]: 2026-01-10 16:59:28.778744159 +0000 UTC m=+0.719019651 container remove 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:59:28 compute-0 systemd[1]: libpod-conmon-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope: Deactivated successfully.
Jan 10 16:59:28 compute-0 sudo[95691]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:28 compute-0 systemd[1]: libpod-conmon-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope: Deactivated successfully.
Jan 10 16:59:28 compute-0 sudo[95597]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:28 compute-0 sudo[95796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:28 compute-0 sudo[95796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:28 compute-0 sudo[95796]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:28 compute-0 sudo[95821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 16:59:28 compute-0 sudo[95821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:29 compute-0 ceph-mon[75249]: pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:29 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/80309119' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.246367188 +0000 UTC m=+0.044899788 container create dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 10 16:59:29 compute-0 systemd[1]: Started libpod-conmon-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope.
Jan 10 16:59:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.2249762 +0000 UTC m=+0.023508850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.334339039 +0000 UTC m=+0.132871669 container init dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.341663961 +0000 UTC m=+0.140196581 container start dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 16:59:29 compute-0 friendly_beaver[95872]: 167 167
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.345924884 +0000 UTC m=+0.144457484 container attach dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 16:59:29 compute-0 systemd[1]: libpod-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope: Deactivated successfully.
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.347576581 +0000 UTC m=+0.146109201 container died dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6735e97678c94ca9c929dda53a2bbac83854cac029df94a8b4ad6da2a649b411-merged.mount: Deactivated successfully.
Jan 10 16:59:29 compute-0 podman[95855]: 2026-01-10 16:59:29.385811856 +0000 UTC m=+0.184344476 container remove dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:59:29 compute-0 systemd[1]: libpod-conmon-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope: Deactivated successfully.
Jan 10 16:59:29 compute-0 podman[95896]: 2026-01-10 16:59:29.536718805 +0000 UTC m=+0.039448590 container create f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 16:59:29 compute-0 systemd[1]: Started libpod-conmon-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope.
Jan 10 16:59:29 compute-0 podman[95896]: 2026-01-10 16:59:29.519639842 +0000 UTC m=+0.022369647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 podman[95896]: 2026-01-10 16:59:29.654766315 +0000 UTC m=+0.157496140 container init f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 16:59:29 compute-0 podman[95896]: 2026-01-10 16:59:29.666508935 +0000 UTC m=+0.169238730 container start f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 16:59:29 compute-0 podman[95896]: 2026-01-10 16:59:29.670433988 +0000 UTC m=+0.173163873 container attach f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 16:59:29 compute-0 sudo[95941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwxopsjecxomagjxgkkmbpeblgxrstpv ; /usr/bin/python3'
Jan 10 16:59:29 compute-0 sudo[95941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:29 compute-0 python3[95943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:29 compute-0 podman[95944]: 2026-01-10 16:59:29.903342526 +0000 UTC m=+0.045702021 container create b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 16:59:29 compute-0 systemd[1]: Started libpod-conmon-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope.
Jan 10 16:59:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:29 compute-0 podman[95944]: 2026-01-10 16:59:29.884457861 +0000 UTC m=+0.026817396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:29 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:29 compute-0 podman[95944]: 2026-01-10 16:59:29.992890842 +0000 UTC m=+0.135250347 container init b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:29 compute-0 frosty_knuth[95913]: {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     "0": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "devices": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "/dev/loop3"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             ],
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_name": "ceph_lv0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_size": "21470642176",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "name": "ceph_lv0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "tags": {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.crush_device_class": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.encrypted": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_id": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.vdo": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.with_tpm": "0"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             },
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "vg_name": "ceph_vg0"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         }
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     ],
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     "1": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "devices": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "/dev/loop4"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             ],
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_name": "ceph_lv1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_size": "21470642176",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "name": "ceph_lv1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "tags": {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.crush_device_class": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.encrypted": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_id": "1",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.vdo": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.with_tpm": "0"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             },
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "vg_name": "ceph_vg1"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         }
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     ],
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     "2": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "devices": [
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "/dev/loop5"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             ],
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_name": "ceph_lv2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_size": "21470642176",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "name": "ceph_lv2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "tags": {
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.cluster_name": "ceph",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.crush_device_class": "",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.encrypted": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.objectstore": "bluestore",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osd_id": "2",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.vdo": "0",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:                 "ceph.with_tpm": "0"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             },
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "type": "block",
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:             "vg_name": "ceph_vg2"
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:         }
Jan 10 16:59:29 compute-0 frosty_knuth[95913]:     ]
Jan 10 16:59:29 compute-0 frosty_knuth[95913]: }
Jan 10 16:59:30 compute-0 podman[95944]: 2026-01-10 16:59:30.001855541 +0000 UTC m=+0.144215056 container start b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:30 compute-0 podman[95944]: 2026-01-10 16:59:30.006090843 +0000 UTC m=+0.148450348 container attach b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:30 compute-0 systemd[1]: libpod-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[95967]: 2026-01-10 16:59:30.085684482 +0000 UTC m=+0.034083465 container died f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b-merged.mount: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[95967]: 2026-01-10 16:59:30.13127824 +0000 UTC m=+0.079677223 container remove f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 16:59:30 compute-0 systemd[1]: libpod-conmon-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 sudo[95821]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:30 compute-0 sudo[95998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 16:59:30 compute-0 sudo[95998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:30 compute-0 sudo[95998]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:30 compute-0 sudo[96023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 16:59:30 compute-0 sudo[96023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 10 16:59:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007974817' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 10 16:59:30 compute-0 vigorous_khayyam[95963]: mimic
Jan 10 16:59:30 compute-0 systemd[1]: libpod-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[95944]: 2026-01-10 16:59:30.488351474 +0000 UTC m=+0.630711059 container died b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 16:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620-merged.mount: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[95944]: 2026-01-10 16:59:30.54080075 +0000 UTC m=+0.683160245 container remove b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 16:59:30 compute-0 systemd[1]: libpod-conmon-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 sudo[95941]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.631373686 +0000 UTC m=+0.043306602 container create aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:59:30 compute-0 systemd[1]: Started libpod-conmon-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope.
Jan 10 16:59:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.684330986 +0000 UTC m=+0.096263912 container init aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.689908117 +0000 UTC m=+0.101841023 container start aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 16:59:30 compute-0 wonderful_lewin[96088]: 167 167
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.693537022 +0000 UTC m=+0.105469928 container attach aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 16:59:30 compute-0 systemd[1]: libpod-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.694889861 +0000 UTC m=+0.106822767 container died aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.615675383 +0000 UTC m=+0.027608289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9468a2aa5caa98d3cb3b5bd452afc14516073fa2f2b4a23e0b47c0ef4d31a6a-merged.mount: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[96072]: 2026-01-10 16:59:30.72843101 +0000 UTC m=+0.140363916 container remove aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:30 compute-0 systemd[1]: libpod-conmon-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope: Deactivated successfully.
Jan 10 16:59:30 compute-0 podman[96110]: 2026-01-10 16:59:30.877527987 +0000 UTC m=+0.037300779 container create 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 16:59:30 compute-0 systemd[1]: Started libpod-conmon-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope.
Jan 10 16:59:30 compute-0 podman[96110]: 2026-01-10 16:59:30.860555747 +0000 UTC m=+0.020328559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 16:59:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:30 compute-0 podman[96110]: 2026-01-10 16:59:30.99326895 +0000 UTC m=+0.153041772 container init 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 16:59:30 compute-0 podman[96110]: 2026-01-10 16:59:30.999677475 +0000 UTC m=+0.159450267 container start 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 16:59:31 compute-0 podman[96110]: 2026-01-10 16:59:31.003506106 +0000 UTC m=+0.163278918 container attach 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 16:59:31 compute-0 ceph-mon[75249]: pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:31 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1007974817' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 10 16:59:31 compute-0 sudo[96166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngwofitctbpetnetavqatztoirhfnwfc ; /usr/bin/python3'
Jan 10 16:59:31 compute-0 sudo[96166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:31 compute-0 python3[96171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:31 compute-0 podman[96201]: 2026-01-10 16:59:31.596045243 +0000 UTC m=+0.057396389 container create 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 16:59:31 compute-0 systemd[1]: Started libpod-conmon-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope.
Jan 10 16:59:31 compute-0 systemd[1]: Started libcrun container.
Jan 10 16:59:31 compute-0 podman[96201]: 2026-01-10 16:59:31.573150672 +0000 UTC m=+0.034501838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 16:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 16:59:31 compute-0 podman[96201]: 2026-01-10 16:59:31.693659943 +0000 UTC m=+0.155011119 container init 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:31 compute-0 podman[96201]: 2026-01-10 16:59:31.702095947 +0000 UTC m=+0.163447093 container start 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:31 compute-0 podman[96201]: 2026-01-10 16:59:31.705036361 +0000 UTC m=+0.166387507 container attach 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 16:59:31 compute-0 lvm[96249]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 16:59:31 compute-0 lvm[96249]: VG ceph_vg0 finished
Jan 10 16:59:31 compute-0 lvm[96253]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 16:59:31 compute-0 lvm[96253]: VG ceph_vg2 finished
Jan 10 16:59:31 compute-0 lvm[96251]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 16:59:31 compute-0 lvm[96251]: VG ceph_vg1 finished
Jan 10 16:59:31 compute-0 nifty_lichterman[96127]: {}
Jan 10 16:59:31 compute-0 systemd[1]: libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Deactivated successfully.
Jan 10 16:59:31 compute-0 systemd[1]: libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Consumed 1.576s CPU time.
Jan 10 16:59:31 compute-0 conmon[96127]: conmon 47d9331e2bfe0c8c4af0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope/container/memory.events
Jan 10 16:59:31 compute-0 podman[96110]: 2026-01-10 16:59:31.930757932 +0000 UTC m=+1.090530734 container died 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29-merged.mount: Deactivated successfully.
Jan 10 16:59:31 compute-0 podman[96110]: 2026-01-10 16:59:31.976057701 +0000 UTC m=+1.135830493 container remove 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 16:59:31 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:32 compute-0 systemd[1]: libpod-conmon-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Deactivated successfully.
Jan 10 16:59:32 compute-0 sudo[96023]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 16:59:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 16:59:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:32 compute-0 sudo[96286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 16:59:32 compute-0 sudo[96286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 16:59:32 compute-0 sudo[96286]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 10 16:59:32 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481888757' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 10 16:59:32 compute-0 boring_zhukovsky[96235]: 
Jan 10 16:59:32 compute-0 systemd[1]: libpod-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope: Deactivated successfully.
Jan 10 16:59:32 compute-0 conmon[96235]: conmon 79e03de1f85459d0c198 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope/container/memory.events
Jan 10 16:59:32 compute-0 boring_zhukovsky[96235]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Jan 10 16:59:32 compute-0 podman[96201]: 2026-01-10 16:59:32.248205292 +0000 UTC m=+0.709556468 container died 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 16:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53-merged.mount: Deactivated successfully.
Jan 10 16:59:32 compute-0 podman[96201]: 2026-01-10 16:59:32.306599439 +0000 UTC m=+0.767950615 container remove 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 16:59:32 compute-0 systemd[1]: libpod-conmon-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope: Deactivated successfully.
Jan 10 16:59:32 compute-0 sudo[96166]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:33 compute-0 ceph-mon[75249]: pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:33 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/481888757' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 10 16:59:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:33 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:35 compute-0 ceph-mon[75249]: pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 16:59:35 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:37 compute-0 ceph-mon[75249]: pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:59:37
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Jan 10 16:59:37 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 16:59:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.028637363845935e-07 of space, bias 4.0, pg target 0.0009634364836615122 quantized to 16 (current 1)
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 16:59:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 10 16:59:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 16:59:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 16:59:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 10 16:59:39 compute-0 ceph-mon[75249]: pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:39 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 10 16:59:39 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 10 16:59:39 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 10 16:59:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 10 16:59:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:39 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 16:59:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 10 16:59:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 10 16:59:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:40 compute-0 ceph-mon[75249]: osdmap e34: 3 total, 3 up, 3 in
Jan 10 16:59:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:40 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.661141396s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 71.834594727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:40 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 10 16:59:40 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 10 16:59:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 10 16:59:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:40 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.661141396s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown pruub 71.834594727s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 10 16:59:41 compute-0 ceph-mon[75249]: pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:41 compute-0 ceph-mon[75249]: osdmap e35: 3 total, 3 up, 3 in
Jan 10 16:59:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:41 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 10 16:59:41 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 10 16:59:41 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 10 16:59:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 10 16:59:41 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:41 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 10 16:59:41 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 10 16:59:41 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v89: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 16:59:41 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 16:59:41 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 10 16:59:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:42 compute-0 ceph-mon[75249]: osdmap e36: 3 total, 3 up, 3 in
Jan 10 16:59:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:42 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 10 16:59:42 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.631856918s) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 79.404685974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:42 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 10 16:59:42 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 10 16:59:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 10 16:59:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 10 16:59:42 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.631856918s) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 79.404685974s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:42 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.402929306s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 87.063102722s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:42 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.402929306s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 87.063102722s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 10 16:59:43 compute-0 ceph-mon[75249]: 2.1c scrub starts
Jan 10 16:59:43 compute-0 ceph-mon[75249]: 2.1c scrub ok
Jan 10 16:59:43 compute-0 ceph-mon[75249]: pgmap v89: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:43 compute-0 ceph-mon[75249]: osdmap e37: 3 total, 3 up, 3 in
Jan 10 16:59:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 10 16:59:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 10 16:59:43 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:43 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 10 16:59:43 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:43 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 1 peering, 93 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 10 16:59:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 10 16:59:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 16:59:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 10 16:59:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 10 16:59:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 10 16:59:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] update: starting ev d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 10 16:59:44 compute-0 ceph-mon[75249]: osdmap e38: 3 total, 3 up, 3 in
Jan 10 16:59:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 10 16:59:44 compute-0 ceph-mon[75249]: 4.1f scrub starts
Jan 10 16:59:44 compute-0 ceph-mon[75249]: 4.1f scrub ok
Jan 10 16:59:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 10 16:59:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] complete: finished ev d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Completed event d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 10 16:59:44 compute-0 ceph-mgr[75538]: [progress INFO root] Writing back 10 completed events
Jan 10 16:59:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 16:59:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:44 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 10 16:59:44 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 10 16:59:45 compute-0 ceph-mon[75249]: pgmap v92: 100 pgs: 1 peering, 93 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 10 16:59:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 10 16:59:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:45 compute-0 ceph-mon[75249]: osdmap e39: 3 total, 3 up, 3 in
Jan 10 16:59:45 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 16:59:45 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 10 16:59:45 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 10 16:59:45 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v94: 146 pgs: 77 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 16:59:45 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 10 16:59:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 10 16:59:46 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 75.884468079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:46 compute-0 ceph-mon[75249]: 2.b scrub starts
Jan 10 16:59:46 compute-0 ceph-mon[75249]: 2.b scrub ok
Jan 10 16:59:46 compute-0 ceph-mon[75249]: 3.1c scrub starts
Jan 10 16:59:46 compute-0 ceph-mon[75249]: 3.1c scrub ok
Jan 10 16:59:46 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 75.884468079s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:46 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 10 16:59:46 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 10 16:59:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 10 16:59:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 10 16:59:47 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 10 16:59:47 compute-0 ceph-mon[75249]: pgmap v94: 146 pgs: 77 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 16:59:47 compute-0 ceph-mon[75249]: osdmap e40: 3 total, 3 up, 3 in
Jan 10 16:59:47 compute-0 ceph-mon[75249]: 3.1e scrub starts
Jan 10 16:59:47 compute-0 ceph-mon[75249]: 3.1e scrub ok
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=12.622920990s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 33'38 active pruub 90.103996277s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=12.622920990s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown pruub 90.103996277s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:47 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 10 16:59:47 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 10 16:59:47 compute-0 sshd-session[96325]: Accepted publickey for zuul from 192.168.122.30 port 36624 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 16:59:47 compute-0 systemd-logind[798]: New session 34 of user zuul.
Jan 10 16:59:47 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 10 16:59:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 16:59:47 compute-0 sshd-session[96325]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 16:59:47 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 16 peering, 62 unknown, 99 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:48 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 10 16:59:48 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 10 16:59:48 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 10 16:59:48 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 10 16:59:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=13.497598648s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 86.496665955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=13.497598648s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 86.496665955s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 10 16:59:48 compute-0 python3.9[96479]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 16:59:48 compute-0 ceph-mon[75249]: osdmap e41: 3 total, 3 up, 3 in
Jan 10 16:59:48 compute-0 ceph-mon[75249]: 3.1f scrub starts
Jan 10 16:59:48 compute-0 ceph-mon[75249]: 3.1f scrub ok
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:48 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 10 16:59:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 10 16:59:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 10 16:59:49 compute-0 ceph-mon[75249]: pgmap v97: 177 pgs: 16 peering, 62 unknown, 99 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:49 compute-0 ceph-mon[75249]: 2.9 scrub starts
Jan 10 16:59:49 compute-0 ceph-mon[75249]: 2.9 scrub ok
Jan 10 16:59:49 compute-0 ceph-mon[75249]: 4.7 scrub starts
Jan 10 16:59:49 compute-0 ceph-mon[75249]: 4.7 scrub ok
Jan 10 16:59:49 compute-0 ceph-mon[75249]: osdmap e42: 3 total, 3 up, 3 in
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=40/43 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:49 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 16 peering, 31 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:50 compute-0 sudo[96695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqldomshxhwucguyjkrzenzvapkflwvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064389.8450692-27-213036446281042/AnsiballZ_command.py'
Jan 10 16:59:50 compute-0 sudo[96695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 16:59:50 compute-0 python3.9[96697]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 16:59:50 compute-0 ceph-mon[75249]: osdmap e43: 3 total, 3 up, 3 in
Jan 10 16:59:50 compute-0 ceph-mon[75249]: pgmap v100: 177 pgs: 16 peering, 31 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 10 16:59:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 10 16:59:51 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 16 peering, 161 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:52 compute-0 ceph-mon[75249]: 2.1f scrub starts
Jan 10 16:59:52 compute-0 ceph-mon[75249]: 2.1f scrub ok
Jan 10 16:59:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 10 16:59:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 10 16:59:52 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 10 16:59:52 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 10 16:59:53 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 10 16:59:53 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 10 16:59:53 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 10 16:59:53 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 10 16:59:53 compute-0 ceph-mon[75249]: pgmap v101: 177 pgs: 16 peering, 161 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:53 compute-0 ceph-mon[75249]: 4.1e scrub starts
Jan 10 16:59:53 compute-0 ceph-mon[75249]: 4.1e scrub ok
Jan 10 16:59:53 compute-0 ceph-mon[75249]: 3.a scrub starts
Jan 10 16:59:53 compute-0 ceph-mon[75249]: 3.a scrub ok
Jan 10 16:59:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:53 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 16:59:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 10 16:59:54 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 10 16:59:54 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.946036339s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196174622s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025462151s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275505066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945967674s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196235657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025725365s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275756836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944757462s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195938110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944560051s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196243286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943988800s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195838928s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023788452s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275848389s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023344040s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276062012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942822456s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195831299s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022882462s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276046753s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022719383s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276260376s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941881180s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195747375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022457123s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276466370s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941703796s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196022034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021822929s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276275635s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021719933s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276496887s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939940453s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194961548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021199226s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276519775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939948082s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195663452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-mon[75249]: 2.1d scrub starts
Jan 10 16:59:54 compute-0 ceph-mon[75249]: 4.1c scrub starts
Jan 10 16:59:54 compute-0 ceph-mon[75249]: 2.1d scrub ok
Jan 10 16:59:54 compute-0 ceph-mon[75249]: 4.1c scrub ok
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020483971s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276596069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938565254s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194801331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937766075s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194831848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020411491s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276550293s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937079430s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194824219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936864853s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194839478s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018429756s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018333435s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277122498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936121941s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194946289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017609596s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276664734s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934791565s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934700012s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017175674s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934269905s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193969727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017211914s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277099609s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930295944s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.190406799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.661448479s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491836548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.661420822s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491836548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967473984s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798049927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967448235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798049927s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016772270s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277183533s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967326164s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798057556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967307091s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798057556s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933568954s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194084167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.660369873s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491561890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.660344124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491561890s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016240120s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277198792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932877541s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193984985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966688156s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798217773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966665268s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798217773s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015887260s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277290344s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933269501s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195274353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966203690s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798019409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966114044s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798019409s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.659540176s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491722107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.659503937s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491722107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965217590s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798019409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.658833504s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491676331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.658818245s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491676331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965171814s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798019409s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965347290s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798385620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965298653s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798385620s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.964876175s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798011780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.964852333s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798011780s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932654381s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194847107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955799103s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796836853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650593758s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491889954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650234222s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491912842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954932213s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796813965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649713516s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491882324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649621010s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492034912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649418831s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492172241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648921013s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491943359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953218460s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796546936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648669243s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953178406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796897888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952614784s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796539307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648002625s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492103577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952114105s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796447754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956947327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440971375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956763268s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.441024780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956607819s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440879822s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956534386s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440910339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956438065s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440856934s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956384659s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440849304s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956201553s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440826416s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956261635s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440933228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570457458s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955455780s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440795898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955231667s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440811157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955141068s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569484711s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055030823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569361687s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568908691s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054847717s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954612732s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440727234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954586029s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440734863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568431854s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645599365s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492210388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954238892s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440666199s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949820518s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796524048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954300880s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645345688s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492187500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567855835s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054450989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949461937s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953746796s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440574646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645701408s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492774963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949236870s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948904037s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796211243s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953638077s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440696716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644659996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953246117s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949141502s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796890259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644832611s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492797852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948488235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796607971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644400597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492721558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947714806s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796150208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567257881s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644159317s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492759705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947475433s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796226501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562947273s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.050613403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952595711s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953011513s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440956116s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642189980s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492256165s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 16:59:54 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 10 16:59:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 10 16:59:55 compute-0 ceph-mon[75249]: pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:55 compute-0 ceph-mon[75249]: 2.a scrub starts
Jan 10 16:59:55 compute-0 ceph-mon[75249]: 2.a scrub ok
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 16:59:55 compute-0 ceph-mon[75249]: osdmap e44: 3 total, 3 up, 3 in
Jan 10 16:59:55 compute-0 ceph-mon[75249]: 7.1e scrub starts
Jan 10 16:59:55 compute-0 ceph-mon[75249]: 7.1e scrub ok
Jan 10 16:59:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 10 16:59:55 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 16:59:55 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 10 16:59:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 10 16:59:56 compute-0 ceph-mon[75249]: osdmap e45: 3 total, 3 up, 3 in
Jan 10 16:59:56 compute-0 ceph-mon[75249]: pgmap v105: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:56 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 10 16:59:56 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 10 16:59:57 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 10 16:59:57 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 10 16:59:57 compute-0 ceph-mon[75249]: 4.17 scrub starts
Jan 10 16:59:57 compute-0 ceph-mon[75249]: 4.17 scrub ok
Jan 10 16:59:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:58 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 10 16:59:58 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 10 16:59:58 compute-0 sudo[96695]: pam_unix(sudo:session): session closed for user root
Jan 10 16:59:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 16:59:58 compute-0 ceph-mon[75249]: 3.1a scrub starts
Jan 10 16:59:58 compute-0 ceph-mon[75249]: 3.1a scrub ok
Jan 10 16:59:58 compute-0 ceph-mon[75249]: 7.1d scrub starts
Jan 10 16:59:58 compute-0 ceph-mon[75249]: 7.1d scrub ok
Jan 10 16:59:58 compute-0 ceph-mon[75249]: pgmap v106: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 16:59:58 compute-0 sshd-session[96329]: Connection closed by 192.168.122.30 port 36624
Jan 10 16:59:58 compute-0 sshd-session[96325]: pam_unix(sshd:session): session closed for user zuul
Jan 10 16:59:58 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 10 16:59:58 compute-0 systemd[1]: session-34.scope: Consumed 9.111s CPU time.
Jan 10 16:59:58 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Jan 10 16:59:58 compute-0 systemd-logind[798]: Removed session 34.
Jan 10 16:59:59 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 10 16:59:59 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 10 16:59:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 10 16:59:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 10 16:59:59 compute-0 ceph-mon[75249]: 4.15 scrub starts
Jan 10 16:59:59 compute-0 ceph-mon[75249]: 4.15 scrub ok
Jan 10 17:00:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 10 17:00:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 10 17:00:00 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 10 17:00:00 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 10 17:00:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 10 17:00:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 10 17:00:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 10 17:00:00 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 10 17:00:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 10 17:00:00 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 4.16 scrub starts
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 4.16 scrub ok
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 3.19 scrub starts
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 3.19 scrub ok
Jan 10 17:00:00 compute-0 ceph-mon[75249]: pgmap v107: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:00 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 7.12 scrub starts
Jan 10 17:00:00 compute-0 ceph-mon[75249]: 7.12 scrub ok
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918597221s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055480957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918116570s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055183411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913496971s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.050857544s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917839050s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055328369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:00 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:00 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:00 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:00 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:00 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:01 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 10 17:00:01 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 10 17:00:01 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 10 17:00:01 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 10 17:00:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 10 17:00:01 compute-0 ceph-mon[75249]: 2.1a scrub starts
Jan 10 17:00:01 compute-0 ceph-mon[75249]: 2.1a scrub ok
Jan 10 17:00:01 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 10 17:00:01 compute-0 ceph-mon[75249]: osdmap e46: 3 total, 3 up, 3 in
Jan 10 17:00:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 10 17:00:01 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 10 17:00:01 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:01 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:01 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:01 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 10 17:00:02 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 10 17:00:02 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 10 17:00:02 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 10 17:00:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 10 17:00:02 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 10 17:00:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 10 17:00:02 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926258087s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995513916s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925914764s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995262146s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925272942s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995002747s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924705505s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.994689941s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:02 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:02 compute-0 ceph-mon[75249]: 4.c scrub starts
Jan 10 17:00:02 compute-0 ceph-mon[75249]: 4.c scrub ok
Jan 10 17:00:02 compute-0 ceph-mon[75249]: 5.1c scrub starts
Jan 10 17:00:02 compute-0 ceph-mon[75249]: 5.1c scrub ok
Jan 10 17:00:02 compute-0 ceph-mon[75249]: osdmap e47: 3 total, 3 up, 3 in
Jan 10 17:00:02 compute-0 ceph-mon[75249]: pgmap v110: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:02 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 10 17:00:02 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:02 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:02 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:02 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:03 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 10 17:00:03 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 10 17:00:03 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 10 17:00:03 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 10 17:00:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 10 17:00:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 10 17:00:03 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 10 17:00:03 compute-0 ceph-mon[75249]: 4.0 scrub starts
Jan 10 17:00:03 compute-0 ceph-mon[75249]: 4.0 scrub ok
Jan 10 17:00:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 10 17:00:03 compute-0 ceph-mon[75249]: osdmap e48: 3 total, 3 up, 3 in
Jan 10 17:00:03 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:03 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:03 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:03 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 10 17:00:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 10 17:00:04 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 10 17:00:04 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 10 17:00:04 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 10 17:00:04 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 10 17:00:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 10 17:00:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 10 17:00:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 10 17:00:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 10 17:00:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 10 17:00:04 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 10 17:00:04 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313828468s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.055412292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:04 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:04 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312836647s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.054672241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:04 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 4.3 scrub starts
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 4.3 scrub ok
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 5.1f scrub starts
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 5.1f scrub ok
Jan 10 17:00:04 compute-0 ceph-mon[75249]: osdmap e49: 3 total, 3 up, 3 in
Jan 10 17:00:04 compute-0 ceph-mon[75249]: pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 3.14 scrub starts
Jan 10 17:00:04 compute-0 ceph-mon[75249]: 3.14 scrub ok
Jan 10 17:00:04 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:04 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:05 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 10 17:00:05 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 10 17:00:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 10 17:00:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 10 17:00:05 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 4.19 scrub starts
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 4.19 scrub ok
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 2.14 scrub starts
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 2.14 scrub ok
Jan 10 17:00:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 10 17:00:05 compute-0 ceph-mon[75249]: osdmap e50: 3 total, 3 up, 3 in
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 3.13 scrub starts
Jan 10 17:00:05 compute-0 ceph-mon[75249]: 3.13 scrub ok
Jan 10 17:00:05 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:05 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:06 compute-0 ceph-mon[75249]: osdmap e51: 3 total, 3 up, 3 in
Jan 10 17:00:06 compute-0 ceph-mon[75249]: pgmap v116: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:06 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 10 17:00:07 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 10 17:00:07 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 10 17:00:07 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 10 17:00:07 compute-0 ceph-mon[75249]: 4.6 scrub starts
Jan 10 17:00:07 compute-0 ceph-mon[75249]: 4.6 scrub ok
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:08 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 10 17:00:08 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 10 17:00:08 compute-0 ceph-mon[75249]: 2.12 scrub starts
Jan 10 17:00:08 compute-0 ceph-mon[75249]: 2.12 scrub ok
Jan 10 17:00:08 compute-0 ceph-mon[75249]: pgmap v117: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:08 compute-0 ceph-mon[75249]: 7.17 scrub starts
Jan 10 17:00:08 compute-0 ceph-mon[75249]: 7.17 scrub ok
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:09 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 10 17:00:09 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 10 17:00:09 compute-0 ceph-mon[75249]: 7.16 scrub starts
Jan 10 17:00:09 compute-0 ceph-mon[75249]: 7.16 scrub ok
Jan 10 17:00:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 346 B/s, 1 keys/s, 2 objects/s recovering
Jan 10 17:00:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 10 17:00:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 10 17:00:10 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 10 17:00:10 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 10 17:00:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 10 17:00:10 compute-0 ceph-mon[75249]: pgmap v118: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 346 B/s, 1 keys/s, 2 objects/s recovering
Jan 10 17:00:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 10 17:00:10 compute-0 ceph-mon[75249]: 4.b scrub starts
Jan 10 17:00:10 compute-0 ceph-mon[75249]: 4.b scrub ok
Jan 10 17:00:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 10 17:00:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 10 17:00:10 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 10 17:00:11 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.995003700s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995697021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:11 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:11 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994361877s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995353699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:11 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:11 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:11 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 10 17:00:11 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 10 17:00:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 301 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:12 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 10 17:00:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 10 17:00:12 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 10 17:00:12 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 10 17:00:12 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 10 17:00:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 10 17:00:12 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 10 17:00:12 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:12 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:12 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 10 17:00:12 compute-0 ceph-mon[75249]: osdmap e52: 3 total, 3 up, 3 in
Jan 10 17:00:12 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 10 17:00:12 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 10 17:00:12 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 10 17:00:12 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 10 17:00:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 10 17:00:13 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 10 17:00:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 10 17:00:13 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 4.1d scrub starts
Jan 10 17:00:13 compute-0 ceph-mon[75249]: pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 301 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 4.1d scrub ok
Jan 10 17:00:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 5.10 scrub starts
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 5.10 scrub ok
Jan 10 17:00:13 compute-0 ceph-mon[75249]: osdmap e53: 3 total, 3 up, 3 in
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 7.10 scrub starts
Jan 10 17:00:13 compute-0 ceph-mon[75249]: 7.10 scrub ok
Jan 10 17:00:13 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 10 17:00:13 compute-0 ceph-mon[75249]: osdmap e54: 3 total, 3 up, 3 in
Jan 10 17:00:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 270 B/s, 0 objects/s recovering
Jan 10 17:00:14 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 10 17:00:14 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 10 17:00:14 compute-0 sshd-session[96754]: Accepted publickey for zuul from 192.168.122.30 port 40626 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:00:14 compute-0 ceph-mon[75249]: 7.1b scrub starts
Jan 10 17:00:14 compute-0 ceph-mon[75249]: 7.1b scrub ok
Jan 10 17:00:14 compute-0 systemd-logind[798]: New session 35 of user zuul.
Jan 10 17:00:14 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 10 17:00:14 compute-0 sshd-session[96754]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:00:15 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 10 17:00:15 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 10 17:00:15 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 10 17:00:15 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 10 17:00:15 compute-0 python3.9[96907]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 10 17:00:15 compute-0 ceph-mon[75249]: pgmap v123: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 270 B/s, 0 objects/s recovering
Jan 10 17:00:15 compute-0 ceph-mon[75249]: 7.14 scrub starts
Jan 10 17:00:15 compute-0 ceph-mon[75249]: 7.14 scrub ok
Jan 10 17:00:15 compute-0 ceph-mon[75249]: 3.10 scrub starts
Jan 10 17:00:15 compute-0 ceph-mon[75249]: 3.10 scrub ok
Jan 10 17:00:15 compute-0 systemd[76625]: Starting Mark boot as successful...
Jan 10 17:00:15 compute-0 systemd[76625]: Finished Mark boot as successful.
Jan 10 17:00:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:16 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 10 17:00:16 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 10 17:00:16 compute-0 ceph-mon[75249]: 2.10 scrub starts
Jan 10 17:00:16 compute-0 ceph-mon[75249]: 2.10 scrub ok
Jan 10 17:00:16 compute-0 ceph-mon[75249]: pgmap v124: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:16 compute-0 ceph-mon[75249]: 7.b scrub starts
Jan 10 17:00:16 compute-0 ceph-mon[75249]: 7.b scrub ok
Jan 10 17:00:16 compute-0 python3.9[97082]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:00:17 compute-0 sudo[97236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnjpwtzexnkeboelwtgxtimatlcucghv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064417.1570668-40-135923810359156/AnsiballZ_command.py'
Jan 10 17:00:17 compute-0 sudo[97236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:17 compute-0 python3.9[97238]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:00:17 compute-0 sudo[97236]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 10 17:00:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 10 17:00:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 10 17:00:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 10 17:00:18 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 10 17:00:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 10 17:00:18 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 10 17:00:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 10 17:00:18 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 10 17:00:18 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 10 17:00:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:18 compute-0 sudo[97389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dziaotuayyrtywupsntfofqfgnyxfgjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064418.2180314-52-265209276968205/AnsiballZ_stat.py'
Jan 10 17:00:18 compute-0 sudo[97389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:18 compute-0 python3.9[97391]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:00:18 compute-0 sudo[97389]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:19 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 10 17:00:19 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 10 17:00:19 compute-0 ceph-mon[75249]: pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 10 17:00:19 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 10 17:00:19 compute-0 ceph-mon[75249]: osdmap e55: 3 total, 3 up, 3 in
Jan 10 17:00:19 compute-0 ceph-mon[75249]: 5.17 scrub starts
Jan 10 17:00:19 compute-0 ceph-mon[75249]: 5.17 scrub ok
Jan 10 17:00:19 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 10 17:00:19 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 10 17:00:19 compute-0 sudo[97543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqguwsjjpetmdhamrypehfqzapwphxxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064419.1728013-63-253597201528000/AnsiballZ_file.py'
Jan 10 17:00:19 compute-0 sudo[97543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:19 compute-0 python3.9[97545]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:00:19 compute-0 sudo[97543]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:19 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 10 17:00:19 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 10 17:00:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Jan 10 17:00:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 10 17:00:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 10 17:00:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 10 17:00:20 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 10 17:00:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 10 17:00:20 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 10 17:00:20 compute-0 sudo[97695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpczuxmsxqjhmdkrutmmkftgiffrfnrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064419.992081-72-141802707034636/AnsiballZ_file.py'
Jan 10 17:00:20 compute-0 sudo[97695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:20 compute-0 ceph-mon[75249]: 2.11 scrub starts
Jan 10 17:00:20 compute-0 ceph-mon[75249]: 2.11 scrub ok
Jan 10 17:00:20 compute-0 ceph-mon[75249]: 5.8 scrub starts
Jan 10 17:00:20 compute-0 ceph-mon[75249]: 5.8 scrub ok
Jan 10 17:00:20 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 10 17:00:20 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209659576s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 active pruub 119.055152893s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:20 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:20 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:20 compute-0 python3.9[97697]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:00:20 compute-0 sudo[97695]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:21 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 10 17:00:21 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 10 17:00:21 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 10 17:00:21 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 10 17:00:21 compute-0 python3.9[97847]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:00:21 compute-0 network[97864]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:00:21 compute-0 network[97865]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:00:21 compute-0 network[97866]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:00:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 10 17:00:21 compute-0 ceph-mon[75249]: 3.12 scrub starts
Jan 10 17:00:21 compute-0 ceph-mon[75249]: 3.12 scrub ok
Jan 10 17:00:21 compute-0 ceph-mon[75249]: pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Jan 10 17:00:21 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 10 17:00:21 compute-0 ceph-mon[75249]: osdmap e56: 3 total, 3 up, 3 in
Jan 10 17:00:21 compute-0 ceph-mon[75249]: 3.d scrub starts
Jan 10 17:00:21 compute-0 ceph-mon[75249]: 3.d scrub ok
Jan 10 17:00:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 10 17:00:21 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 10 17:00:21 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:22 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 10 17:00:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 10 17:00:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 10 17:00:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 10 17:00:22 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 10 17:00:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 10 17:00:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 10 17:00:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 10 17:00:22 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 10 17:00:22 compute-0 ceph-mon[75249]: 2.e scrub starts
Jan 10 17:00:22 compute-0 ceph-mon[75249]: 2.e scrub ok
Jan 10 17:00:22 compute-0 ceph-mon[75249]: osdmap e57: 3 total, 3 up, 3 in
Jan 10 17:00:22 compute-0 ceph-mon[75249]: pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 10 17:00:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 10 17:00:22 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692907333s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 active pruub 119.995796204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:22 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:22 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:23 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 10 17:00:23 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 10 17:00:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 10 17:00:23 compute-0 ceph-mon[75249]: 5.14 scrub starts
Jan 10 17:00:23 compute-0 ceph-mon[75249]: 5.14 scrub ok
Jan 10 17:00:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 10 17:00:23 compute-0 ceph-mon[75249]: osdmap e58: 3 total, 3 up, 3 in
Jan 10 17:00:23 compute-0 ceph-mon[75249]: 3.b scrub starts
Jan 10 17:00:23 compute-0 ceph-mon[75249]: 3.b scrub ok
Jan 10 17:00:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 10 17:00:23 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 10 17:00:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 10 17:00:24 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 10 17:00:24 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 10 17:00:24 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 10 17:00:24 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 10 17:00:24 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 10 17:00:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 10 17:00:24 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 10 17:00:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 10 17:00:24 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 10 17:00:24 compute-0 ceph-mon[75249]: osdmap e59: 3 total, 3 up, 3 in
Jan 10 17:00:24 compute-0 ceph-mon[75249]: pgmap v133: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:24 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 10 17:00:24 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564128876s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 active pruub 118.069427490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:24 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:24 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:25 compute-0 python3.9[98126]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:00:25 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 10 17:00:25 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 10 17:00:25 compute-0 python3.9[98276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:00:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 5.15 scrub starts
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 5.15 scrub ok
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 5.a scrub starts
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 5.a scrub ok
Jan 10 17:00:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 10 17:00:25 compute-0 ceph-mon[75249]: osdmap e60: 3 total, 3 up, 3 in
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 3.2 scrub starts
Jan 10 17:00:25 compute-0 ceph-mon[75249]: 3.2 scrub ok
Jan 10 17:00:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 10 17:00:25 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 10 17:00:25 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 10 17:00:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 10 17:00:26 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 17:00:26 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 17:00:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 10 17:00:26 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 10 17:00:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 10 17:00:26 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 10 17:00:26 compute-0 ceph-mon[75249]: osdmap e61: 3 total, 3 up, 3 in
Jan 10 17:00:26 compute-0 ceph-mon[75249]: pgmap v136: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:26 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 10 17:00:26 compute-0 python3.9[98430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:00:27 compute-0 sudo[98586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcowvutyyygmsovhbvsuxbjbsjoanewq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064427.3448913-120-12945798005863/AnsiballZ_setup.py'
Jan 10 17:00:27 compute-0 sudo[98586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:27 compute-0 ceph-mon[75249]: 2.c scrub starts
Jan 10 17:00:27 compute-0 ceph-mon[75249]: 2.c scrub ok
Jan 10 17:00:27 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 10 17:00:27 compute-0 ceph-mon[75249]: osdmap e62: 3 total, 3 up, 3 in
Jan 10 17:00:27 compute-0 python3.9[98588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:00:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 10 17:00:28 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 10 17:00:28 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380357742s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 active pruub 133.736038208s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:28 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:28 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:28 compute-0 sudo[98586]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:28 compute-0 sudo[98670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpmxuqkhufyseyaqypjbnjuzyrjgccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064427.3448913-120-12945798005863/AnsiballZ_dnf.py'
Jan 10 17:00:28 compute-0 sudo[98670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:00:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 10 17:00:28 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 10 17:00:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 10 17:00:28 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 10 17:00:28 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:28 compute-0 ceph-mon[75249]: pgmap v138: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:28 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 10 17:00:28 compute-0 python3.9[98672]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:00:29 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 10 17:00:29 compute-0 ceph-mon[75249]: osdmap e63: 3 total, 3 up, 3 in
Jan 10 17:00:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 10 17:00:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 10 17:00:30 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 10 17:00:30 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 10 17:00:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 10 17:00:30 compute-0 ceph-mon[75249]: pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 10 17:00:30 compute-0 ceph-mon[75249]: 3.0 scrub starts
Jan 10 17:00:30 compute-0 ceph-mon[75249]: 3.0 scrub ok
Jan 10 17:00:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 10 17:00:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 10 17:00:30 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 10 17:00:31 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 10 17:00:31 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 10 17:00:31 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018323898s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 active pruub 134.575500488s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:31 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:31 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 10 17:00:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 10 17:00:31 compute-0 ceph-mon[75249]: osdmap e64: 3 total, 3 up, 3 in
Jan 10 17:00:31 compute-0 ceph-mon[75249]: 7.0 scrub starts
Jan 10 17:00:31 compute-0 ceph-mon[75249]: 7.0 scrub ok
Jan 10 17:00:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 10 17:00:31 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 10 17:00:31 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 10 17:00:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 10 17:00:32 compute-0 sudo[98736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:00:32 compute-0 sudo[98736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:32 compute-0 sudo[98736]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:32 compute-0 sudo[98761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:00:32 compute-0 sudo[98761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 10 17:00:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 10 17:00:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 10 17:00:32 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 10 17:00:32 compute-0 ceph-mon[75249]: osdmap e65: 3 total, 3 up, 3 in
Jan 10 17:00:32 compute-0 ceph-mon[75249]: pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:32 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 10 17:00:33 compute-0 sudo[98761]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:33 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 17:00:33 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:00:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:33 compute-0 sudo[98822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:00:33 compute-0 sudo[98822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:33 compute-0 sudo[98822]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:33 compute-0 sudo[98847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:00:33 compute-0 sudo[98847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.78621491 +0000 UTC m=+0.045880311 container create 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:00:33 compute-0 systemd[1]: Started libpod-conmon-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope.
Jan 10 17:00:33 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.76751309 +0000 UTC m=+0.027178521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.878124024 +0000 UTC m=+0.137789455 container init 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.885911364 +0000 UTC m=+0.145576785 container start 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.889745963 +0000 UTC m=+0.149411384 container attach 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:00:33 compute-0 musing_franklin[98900]: 167 167
Jan 10 17:00:33 compute-0 systemd[1]: libpod-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope: Deactivated successfully.
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.89459624 +0000 UTC m=+0.154261661 container died 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b9d4240e6d10a56ccea08051c60a74f670d48f4a819c1c2626c18c447482bec-merged.mount: Deactivated successfully.
Jan 10 17:00:33 compute-0 podman[98884]: 2026-01-10 17:00:33.933220005 +0000 UTC m=+0.192885406 container remove 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:00:33 compute-0 systemd[1]: libpod-conmon-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope: Deactivated successfully.
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 10 17:00:33 compute-0 ceph-mon[75249]: osdmap e66: 3 total, 3 up, 3 in
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:00:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:00:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 10 17:00:34 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.089202294 +0000 UTC m=+0.045675935 container create 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:00:34 compute-0 systemd[1]: Started libpod-conmon-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope.
Jan 10 17:00:34 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.071155353 +0000 UTC m=+0.027629034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.181764507 +0000 UTC m=+0.138238148 container init 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.190443482 +0000 UTC m=+0.146917143 container start 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.194221699 +0000 UTC m=+0.150695390 container attach 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 17:00:34 compute-0 condescending_grothendieck[98941]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:00:34 compute-0 condescending_grothendieck[98941]: --> All data devices are unavailable
Jan 10 17:00:34 compute-0 systemd[1]: libpod-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope: Deactivated successfully.
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.698268099 +0000 UTC m=+0.654741750 container died 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc-merged.mount: Deactivated successfully.
Jan 10 17:00:34 compute-0 podman[98924]: 2026-01-10 17:00:34.750916691 +0000 UTC m=+0.707390332 container remove 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:00:34 compute-0 systemd[1]: libpod-conmon-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope: Deactivated successfully.
Jan 10 17:00:34 compute-0 sudo[98847]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:34 compute-0 sudo[98977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:00:34 compute-0 sudo[98977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:34 compute-0 sudo[98977]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:34 compute-0 sudo[99005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:00:34 compute-0 sudo[99005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 10 17:00:34 compute-0 ceph-mon[75249]: 5.b scrub starts
Jan 10 17:00:34 compute-0 ceph-mon[75249]: 5.b scrub ok
Jan 10 17:00:34 compute-0 ceph-mon[75249]: pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 17:00:34 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 10 17:00:34 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 10 17:00:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 10 17:00:34 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 10 17:00:35 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 17:00:35 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.272159589 +0000 UTC m=+0.044785500 container create c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:00:35 compute-0 systemd[1]: Started libpod-conmon-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope.
Jan 10 17:00:35 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.255072985 +0000 UTC m=+0.027698866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:35 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.387386543 +0000 UTC m=+0.160012424 container init c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.393539828 +0000 UTC m=+0.166165729 container start c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:00:35 compute-0 relaxed_nobel[99060]: 167 167
Jan 10 17:00:35 compute-0 systemd[1]: libpod-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope: Deactivated successfully.
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.399073175 +0000 UTC m=+0.171699046 container attach c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.399509087 +0000 UTC m=+0.172134968 container died c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b5bbd021456820ce5268344c82a626950efe0d699502a6ab4d061983e522d43-merged.mount: Deactivated successfully.
Jan 10 17:00:35 compute-0 podman[99043]: 2026-01-10 17:00:35.444838521 +0000 UTC m=+0.217464382 container remove c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:00:35 compute-0 systemd[1]: libpod-conmon-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope: Deactivated successfully.
Jan 10 17:00:35 compute-0 podman[99084]: 2026-01-10 17:00:35.602740965 +0000 UTC m=+0.040759826 container create f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:00:35 compute-0 systemd[1]: Started libpod-conmon-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope.
Jan 10 17:00:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:35 compute-0 podman[99084]: 2026-01-10 17:00:35.584639252 +0000 UTC m=+0.022658133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:35 compute-0 podman[99084]: 2026-01-10 17:00:35.69751302 +0000 UTC m=+0.135531911 container init f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 17:00:35 compute-0 podman[99084]: 2026-01-10 17:00:35.712881855 +0000 UTC m=+0.150900716 container start f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:00:35 compute-0 podman[99084]: 2026-01-10 17:00:35.717894737 +0000 UTC m=+0.155913598 container attach f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:00:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]: {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     "0": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "devices": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "/dev/loop3"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             ],
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_name": "ceph_lv0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_size": "21470642176",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "name": "ceph_lv0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "tags": {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_name": "ceph",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.crush_device_class": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.encrypted": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.objectstore": "bluestore",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_id": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.vdo": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.with_tpm": "0"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             },
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "vg_name": "ceph_vg0"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         }
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     ],
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     "1": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "devices": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "/dev/loop4"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             ],
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_name": "ceph_lv1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_size": "21470642176",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "name": "ceph_lv1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "tags": {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_name": "ceph",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.crush_device_class": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.encrypted": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.objectstore": "bluestore",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_id": "1",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.vdo": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.with_tpm": "0"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             },
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "vg_name": "ceph_vg1"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         }
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     ],
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     "2": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "devices": [
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "/dev/loop5"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             ],
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_name": "ceph_lv2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_size": "21470642176",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "name": "ceph_lv2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "tags": {
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.cluster_name": "ceph",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.crush_device_class": "",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.encrypted": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.objectstore": "bluestore",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osd_id": "2",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.vdo": "0",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:                 "ceph.with_tpm": "0"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             },
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "type": "block",
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:             "vg_name": "ceph_vg2"
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:         }
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]:     ]
Jan 10 17:00:36 compute-0 jovial_lichterman[99100]: }
Jan 10 17:00:36 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 10 17:00:36 compute-0 ceph-mon[75249]: osdmap e67: 3 total, 3 up, 3 in
Jan 10 17:00:36 compute-0 ceph-mon[75249]: 3.4 scrub starts
Jan 10 17:00:36 compute-0 ceph-mon[75249]: 3.4 scrub ok
Jan 10 17:00:36 compute-0 systemd[1]: libpod-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope: Deactivated successfully.
Jan 10 17:00:36 compute-0 podman[99084]: 2026-01-10 17:00:36.067492012 +0000 UTC m=+0.505510873 container died f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e-merged.mount: Deactivated successfully.
Jan 10 17:00:36 compute-0 podman[99084]: 2026-01-10 17:00:36.139975506 +0000 UTC m=+0.577994367 container remove f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 17:00:36 compute-0 systemd[1]: libpod-conmon-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope: Deactivated successfully.
Jan 10 17:00:36 compute-0 sudo[99005]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:36 compute-0 sudo[99123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:00:36 compute-0 sudo[99123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:36 compute-0 sudo[99123]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:36 compute-0 sudo[99148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:00:36 compute-0 sudo[99148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:36 compute-0 podman[99186]: 2026-01-10 17:00:36.568030204 +0000 UTC m=+0.020078110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:36 compute-0 podman[99186]: 2026-01-10 17:00:36.80262132 +0000 UTC m=+0.254669226 container create 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:00:36 compute-0 systemd[1]: Started libpod-conmon-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope.
Jan 10 17:00:36 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:37 compute-0 podman[99186]: 2026-01-10 17:00:37.009071715 +0000 UTC m=+0.461119621 container init 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:00:37 compute-0 podman[99186]: 2026-01-10 17:00:37.019855955 +0000 UTC m=+0.471903821 container start 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:00:37 compute-0 dazzling_moser[99202]: 167 167
Jan 10 17:00:37 compute-0 systemd[1]: libpod-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope: Deactivated successfully.
Jan 10 17:00:37 compute-0 podman[99186]: 2026-01-10 17:00:37.038048782 +0000 UTC m=+0.490096688 container attach 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:00:37 compute-0 podman[99186]: 2026-01-10 17:00:37.038538007 +0000 UTC m=+0.490585883 container died 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 17:00:37 compute-0 ceph-mon[75249]: 2.0 scrub starts
Jan 10 17:00:37 compute-0 ceph-mon[75249]: 2.0 scrub ok
Jan 10 17:00:37 compute-0 ceph-mon[75249]: pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 10 17:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ede883d89862905a42416e9d84caacab7e0c3ef034ecb42e8c52678d3517fa7-merged.mount: Deactivated successfully.
Jan 10 17:00:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:00:37
Jan 10 17:00:37 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:00:37 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:00:37 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'backups', 'images', 'vms', 'cephfs.cephfs.data']
Jan 10 17:00:37 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 10 17:00:38 compute-0 podman[99186]: 2026-01-10 17:00:38.106040141 +0000 UTC m=+1.558088017 container remove 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:00:38 compute-0 systemd[1]: libpod-conmon-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope: Deactivated successfully.
Jan 10 17:00:38 compute-0 podman[99228]: 2026-01-10 17:00:38.298398196 +0000 UTC m=+0.057213612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:00:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:00:38 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483779907s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 active pruub 141.736114502s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:00:38 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:00:38 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 10 17:00:38 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:00:38 compute-0 podman[99228]: 2026-01-10 17:00:38.998803541 +0000 UTC m=+0.757618957 container create 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:00:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:00:39 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 10 17:00:39 compute-0 ceph-mon[75249]: pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 10 17:00:39 compute-0 systemd[1]: Started libpod-conmon-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope.
Jan 10 17:00:39 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:00:39 compute-0 podman[99228]: 2026-01-10 17:00:39.104971229 +0000 UTC m=+0.863786645 container init 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:00:39 compute-0 podman[99228]: 2026-01-10 17:00:39.112496779 +0000 UTC m=+0.871312185 container start 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 17:00:39 compute-0 podman[99228]: 2026-01-10 17:00:39.116549653 +0000 UTC m=+0.875365059 container attach 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 17:00:39 compute-0 lvm[99323]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:00:39 compute-0 lvm[99323]: VG ceph_vg0 finished
Jan 10 17:00:39 compute-0 lvm[99324]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:00:39 compute-0 lvm[99324]: VG ceph_vg1 finished
Jan 10 17:00:39 compute-0 lvm[99326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:00:39 compute-0 lvm[99326]: VG ceph_vg2 finished
Jan 10 17:00:39 compute-0 musing_swartz[99245]: {}
Jan 10 17:00:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 10 17:00:39 compute-0 systemd[1]: libpod-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Deactivated successfully.
Jan 10 17:00:39 compute-0 systemd[1]: libpod-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Consumed 1.443s CPU time.
Jan 10 17:00:39 compute-0 podman[99228]: 2026-01-10 17:00:39.993925863 +0000 UTC m=+1.752741299 container died 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:00:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 10 17:00:40 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 10 17:00:40 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:00:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 10 17:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505-merged.mount: Deactivated successfully.
Jan 10 17:00:40 compute-0 ceph-mon[75249]: 7.13 scrub starts
Jan 10 17:00:40 compute-0 ceph-mon[75249]: 7.13 scrub ok
Jan 10 17:00:40 compute-0 ceph-mon[75249]: osdmap e68: 3 total, 3 up, 3 in
Jan 10 17:00:40 compute-0 podman[99228]: 2026-01-10 17:00:40.058987503 +0000 UTC m=+1.817802909 container remove 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:00:40 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 10 17:00:40 compute-0 systemd[1]: libpod-conmon-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Deactivated successfully.
Jan 10 17:00:40 compute-0 sudo[99148]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:00:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:00:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:40 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 10 17:00:40 compute-0 sudo[99339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:00:40 compute-0 sudo[99339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:00:40 compute-0 sudo[99339]: pam_unix(sudo:session): session closed for user root
Jan 10 17:00:40 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 17:00:40 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 17:00:41 compute-0 ceph-mon[75249]: pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 10 17:00:41 compute-0 ceph-mon[75249]: 2.13 scrub starts
Jan 10 17:00:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:00:41 compute-0 ceph-mon[75249]: 2.13 scrub ok
Jan 10 17:00:41 compute-0 ceph-mon[75249]: 7.7 scrub starts
Jan 10 17:00:41 compute-0 ceph-mon[75249]: 7.7 scrub ok
Jan 10 17:00:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 0 objects/s recovering
Jan 10 17:00:42 compute-0 sshd-session[99364]: Connection closed by authenticating user root 216.36.124.133 port 41274 [preauth]
Jan 10 17:00:43 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 10 17:00:43 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 10 17:00:43 compute-0 ceph-mon[75249]: pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 0 objects/s recovering
Jan 10 17:00:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 B/s, 0 objects/s recovering
Jan 10 17:00:44 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 10 17:00:44 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 10 17:00:44 compute-0 ceph-mon[75249]: 3.17 scrub starts
Jan 10 17:00:44 compute-0 ceph-mon[75249]: 3.17 scrub ok
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.290204970656704e-07 of space, bias 4.0, pg target 0.0011148245964788044 quantized to 16 (current 16)
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:00:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:00:45 compute-0 ceph-mon[75249]: pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 B/s, 0 objects/s recovering
Jan 10 17:00:45 compute-0 ceph-mon[75249]: 3.9 scrub starts
Jan 10 17:00:45 compute-0 ceph-mon[75249]: 3.9 scrub ok
Jan 10 17:00:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:47 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 17:00:47 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 17:00:47 compute-0 ceph-mon[75249]: pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:48 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 17:00:48 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 17:00:48 compute-0 ceph-mon[75249]: 7.d scrub starts
Jan 10 17:00:48 compute-0 ceph-mon[75249]: 7.d scrub ok
Jan 10 17:00:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:49 compute-0 ceph-mon[75249]: pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:49 compute-0 ceph-mon[75249]: 5.0 scrub starts
Jan 10 17:00:49 compute-0 ceph-mon[75249]: 5.0 scrub ok
Jan 10 17:00:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:50 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 10 17:00:50 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 10 17:00:50 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 17:00:50 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 17:00:50 compute-0 ceph-mon[75249]: pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 17:00:51 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 17:00:51 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 17:00:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 17:00:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 2.8 scrub starts
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 2.8 scrub ok
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 2.1 scrub starts
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 2.1 scrub ok
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 7.19 scrub starts
Jan 10 17:00:51 compute-0 ceph-mon[75249]: 7.19 scrub ok
Jan 10 17:00:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Jan 10 17:00:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 10 17:00:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 10 17:00:52 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 17:00:52 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 17:00:52 compute-0 ceph-mon[75249]: 5.6 scrub starts
Jan 10 17:00:52 compute-0 ceph-mon[75249]: 5.6 scrub ok
Jan 10 17:00:52 compute-0 ceph-mon[75249]: pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Jan 10 17:00:53 compute-0 ceph-mon[75249]: 3.15 scrub starts
Jan 10 17:00:53 compute-0 ceph-mon[75249]: 3.15 scrub ok
Jan 10 17:00:53 compute-0 ceph-mon[75249]: 5.e scrub starts
Jan 10 17:00:53 compute-0 ceph-mon[75249]: 5.e scrub ok
Jan 10 17:00:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:54 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 17:00:54 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 17:00:54 compute-0 ceph-mon[75249]: pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:55 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 10 17:00:55 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 10 17:00:55 compute-0 ceph-mon[75249]: 5.d scrub starts
Jan 10 17:00:55 compute-0 ceph-mon[75249]: 5.d scrub ok
Jan 10 17:00:55 compute-0 ceph-mon[75249]: 2.16 scrub starts
Jan 10 17:00:55 compute-0 ceph-mon[75249]: 2.16 scrub ok
Jan 10 17:00:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 10 17:00:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 10 17:00:56 compute-0 ceph-mon[75249]: pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:56 compute-0 ceph-mon[75249]: 7.f scrub starts
Jan 10 17:00:56 compute-0 ceph-mon[75249]: 7.f scrub ok
Jan 10 17:00:57 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 10 17:00:57 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 10 17:00:57 compute-0 ceph-mon[75249]: 5.3 scrub starts
Jan 10 17:00:57 compute-0 ceph-mon[75249]: 5.3 scrub ok
Jan 10 17:00:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:58 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 10 17:00:58 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 10 17:00:58 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 17:00:58 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 17:00:58 compute-0 ceph-mon[75249]: pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:00:58 compute-0 ceph-mon[75249]: 3.6 scrub starts
Jan 10 17:00:58 compute-0 ceph-mon[75249]: 3.6 scrub ok
Jan 10 17:00:58 compute-0 ceph-mon[75249]: 4.d scrub starts
Jan 10 17:00:58 compute-0 ceph-mon[75249]: 4.d scrub ok
Jan 10 17:00:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:00:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 17:00:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 17:00:59 compute-0 ceph-mon[75249]: 4.f scrub starts
Jan 10 17:00:59 compute-0 ceph-mon[75249]: 4.f scrub ok
Jan 10 17:01:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 17:01:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 17:01:01 compute-0 ceph-mon[75249]: pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:01 compute-0 ceph-mon[75249]: 4.4 scrub starts
Jan 10 17:01:01 compute-0 ceph-mon[75249]: 4.4 scrub ok
Jan 10 17:01:01 compute-0 CROND[99442]: (root) CMD (run-parts /etc/cron.hourly)
Jan 10 17:01:01 compute-0 run-parts[99445]: (/etc/cron.hourly) starting 0anacron
Jan 10 17:01:01 compute-0 anacron[99453]: Anacron started on 2026-01-10
Jan 10 17:01:01 compute-0 anacron[99453]: Will run job `cron.daily' in 15 min.
Jan 10 17:01:01 compute-0 anacron[99453]: Will run job `cron.weekly' in 35 min.
Jan 10 17:01:01 compute-0 anacron[99453]: Will run job `cron.monthly' in 55 min.
Jan 10 17:01:01 compute-0 anacron[99453]: Jobs will be executed sequentially
Jan 10 17:01:01 compute-0 run-parts[99455]: (/etc/cron.hourly) finished 0anacron
Jan 10 17:01:01 compute-0 CROND[99441]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 10 17:01:01 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 10 17:01:01 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 10 17:01:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:02 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 10 17:01:02 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 10 17:01:03 compute-0 ceph-mon[75249]: 5.2 scrub starts
Jan 10 17:01:03 compute-0 ceph-mon[75249]: 5.2 scrub ok
Jan 10 17:01:03 compute-0 ceph-mon[75249]: pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:03 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 17:01:03 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 17:01:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:04 compute-0 ceph-mon[75249]: 2.2 scrub starts
Jan 10 17:01:04 compute-0 ceph-mon[75249]: 2.2 scrub ok
Jan 10 17:01:04 compute-0 ceph-mon[75249]: 5.1b scrub starts
Jan 10 17:01:04 compute-0 ceph-mon[75249]: 5.1b scrub ok
Jan 10 17:01:05 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 10 17:01:05 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 10 17:01:05 compute-0 ceph-mon[75249]: pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:05 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 17:01:05 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 17:01:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:06 compute-0 ceph-mon[75249]: 7.3 scrub starts
Jan 10 17:01:06 compute-0 ceph-mon[75249]: 7.3 scrub ok
Jan 10 17:01:06 compute-0 ceph-mon[75249]: 2.1e scrub starts
Jan 10 17:01:06 compute-0 ceph-mon[75249]: 2.1e scrub ok
Jan 10 17:01:07 compute-0 ceph-mon[75249]: pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:07 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 17:01:07 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:08 compute-0 ceph-mon[75249]: 4.9 scrub starts
Jan 10 17:01:08 compute-0 ceph-mon[75249]: 4.9 scrub ok
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:09 compute-0 ceph-mon[75249]: pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:10 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 17:01:10 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 17:01:11 compute-0 ceph-mon[75249]: pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:12 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 10 17:01:12 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 10 17:01:12 compute-0 ceph-mon[75249]: 2.1b scrub starts
Jan 10 17:01:12 compute-0 ceph-mon[75249]: 2.1b scrub ok
Jan 10 17:01:13 compute-0 ceph-mon[75249]: pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:13 compute-0 ceph-mon[75249]: 3.3 scrub starts
Jan 10 17:01:13 compute-0 ceph-mon[75249]: 3.3 scrub ok
Jan 10 17:01:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 17:01:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 17:01:14 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 17:01:14 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 17:01:15 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 10 17:01:15 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 10 17:01:15 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 17:01:15 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 17:01:15 compute-0 ceph-mon[75249]: pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:15 compute-0 ceph-mon[75249]: 4.1b scrub starts
Jan 10 17:01:15 compute-0 ceph-mon[75249]: 4.1b scrub ok
Jan 10 17:01:15 compute-0 ceph-mon[75249]: 4.10 scrub starts
Jan 10 17:01:15 compute-0 ceph-mon[75249]: 4.10 scrub ok
Jan 10 17:01:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:16 compute-0 ceph-mon[75249]: 5.5 scrub starts
Jan 10 17:01:16 compute-0 ceph-mon[75249]: 5.5 scrub ok
Jan 10 17:01:16 compute-0 ceph-mon[75249]: 4.1a scrub starts
Jan 10 17:01:16 compute-0 ceph-mon[75249]: 4.1a scrub ok
Jan 10 17:01:17 compute-0 ceph-mon[75249]: pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:17 compute-0 sudo[98670]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:18 compute-0 sudo[99605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvgtkqlsxfndaicxglpvpaggflnqbzfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064477.8414311-132-120236147248038/AnsiballZ_command.py'
Jan 10 17:01:18 compute-0 sudo[99605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:18 compute-0 python3.9[99607]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:01:18 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 17:01:18 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 17:01:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:19 compute-0 sudo[99605]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:19 compute-0 ceph-mon[75249]: pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:19 compute-0 ceph-mon[75249]: 4.2 scrub starts
Jan 10 17:01:19 compute-0 ceph-mon[75249]: 4.2 scrub ok
Jan 10 17:01:19 compute-0 sudo[99892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xehdmzaxyhaaiprjqcjewryjlymfqpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064479.3763316-140-119487483737822/AnsiballZ_selinux.py'
Jan 10 17:01:19 compute-0 sudo[99892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:20 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 10 17:01:20 compute-0 python3.9[99894]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 10 17:01:20 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 10 17:01:20 compute-0 sudo[99892]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:20 compute-0 ceph-mon[75249]: pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:20 compute-0 sudo[100044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebztjfyhhdmzlopqwocsqdffnrnvplie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064480.6319764-151-100027612040199/AnsiballZ_command.py'
Jan 10 17:01:20 compute-0 sudo[100044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:21 compute-0 python3.9[100046]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 10 17:01:21 compute-0 sudo[100044]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:21 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 17:01:21 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 17:01:21 compute-0 ceph-mon[75249]: 2.f scrub starts
Jan 10 17:01:21 compute-0 ceph-mon[75249]: 2.f scrub ok
Jan 10 17:01:21 compute-0 sudo[100196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhfaubygcbyzqfodwbgkffuavkqpwvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064481.2342048-159-30629369213599/AnsiballZ_file.py'
Jan 10 17:01:21 compute-0 sudo[100196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:21 compute-0 python3.9[100198]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:01:21 compute-0 sudo[100196]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:22 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 17:01:22 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 17:01:22 compute-0 ceph-mon[75249]: 4.e scrub starts
Jan 10 17:01:22 compute-0 ceph-mon[75249]: 4.e scrub ok
Jan 10 17:01:22 compute-0 ceph-mon[75249]: pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:22 compute-0 sudo[100348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llliwcgsannuojxtfehldpfasbvjvrse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064482.082501-167-74232395177668/AnsiballZ_mount.py'
Jan 10 17:01:22 compute-0 sudo[100348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:22 compute-0 python3.9[100350]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 10 17:01:22 compute-0 sudo[100348]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 10 17:01:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 10 17:01:23 compute-0 ceph-mon[75249]: 4.1 scrub starts
Jan 10 17:01:23 compute-0 ceph-mon[75249]: 4.1 scrub ok
Jan 10 17:01:23 compute-0 sudo[100500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxkzcjigktaqwvuthliudgbztthvocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064483.4492238-195-226858395387687/AnsiballZ_file.py'
Jan 10 17:01:23 compute-0 sudo[100500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:23 compute-0 python3.9[100502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:01:23 compute-0 sudo[100500]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:24 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 17:01:24 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 17:01:24 compute-0 sudo[100652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqmlapynyyuffmimfqitxixanepoperh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064484.108923-203-100484094538561/AnsiballZ_stat.py'
Jan 10 17:01:24 compute-0 sudo[100652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:24 compute-0 ceph-mon[75249]: 7.18 scrub starts
Jan 10 17:01:24 compute-0 ceph-mon[75249]: 7.18 scrub ok
Jan 10 17:01:24 compute-0 ceph-mon[75249]: pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:24 compute-0 python3.9[100654]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:01:24 compute-0 sudo[100652]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:24 compute-0 sudo[100730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbfoirjntlurogeoriqkhfnmeroyrej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064484.108923-203-100484094538561/AnsiballZ_file.py'
Jan 10 17:01:24 compute-0 sudo[100730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:25 compute-0 python3.9[100732]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:01:25 compute-0 sudo[100730]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:25 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 17:01:25 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 17:01:25 compute-0 ceph-mon[75249]: 2.17 scrub starts
Jan 10 17:01:25 compute-0 ceph-mon[75249]: 2.17 scrub ok
Jan 10 17:01:25 compute-0 sudo[100882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqfaodulrperrvbnrmzurpamdaszflf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064485.453745-224-110554449957607/AnsiballZ_stat.py'
Jan 10 17:01:25 compute-0 sudo[100882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:26 compute-0 python3.9[100884]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:01:26 compute-0 sudo[100882]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:26 compute-0 ceph-mon[75249]: 4.13 scrub starts
Jan 10 17:01:26 compute-0 ceph-mon[75249]: 4.13 scrub ok
Jan 10 17:01:26 compute-0 ceph-mon[75249]: pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:26 compute-0 sudo[101036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlynjaivzdmfcpmagvnnrhrliioxrjzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064486.5247264-237-181650412367641/AnsiballZ_getent.py'
Jan 10 17:01:26 compute-0 sudo[101036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:27 compute-0 python3.9[101038]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 10 17:01:27 compute-0 sudo[101036]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:27 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 17:01:27 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 17:01:27 compute-0 sudo[101189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiryxahyweduqtpmwboaqpfydmxxtpws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064487.4595418-247-173411052204552/AnsiballZ_getent.py'
Jan 10 17:01:27 compute-0 sudo[101189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:27 compute-0 python3.9[101191]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 10 17:01:27 compute-0 sudo[101189]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:28 compute-0 ceph-mon[75249]: 4.a scrub starts
Jan 10 17:01:28 compute-0 ceph-mon[75249]: 4.a scrub ok
Jan 10 17:01:28 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 10 17:01:28 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 10 17:01:28 compute-0 sudo[101342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxkysvvicmicneykbdtweizgohxewocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064488.196046-255-144102047261200/AnsiballZ_group.py'
Jan 10 17:01:28 compute-0 sudo[101342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:28 compute-0 python3.9[101344]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 17:01:28 compute-0 sudo[101342]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:29 compute-0 ceph-mon[75249]: pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:29 compute-0 ceph-mon[75249]: 7.6 scrub starts
Jan 10 17:01:29 compute-0 ceph-mon[75249]: 7.6 scrub ok
Jan 10 17:01:29 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 10 17:01:29 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 10 17:01:29 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 17:01:29 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 17:01:29 compute-0 sudo[101494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqcclcyswmsywosrbjqoddqvjpdrvtfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064489.1744637-264-245480904441751/AnsiballZ_file.py'
Jan 10 17:01:29 compute-0 sudo[101494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:29 compute-0 python3.9[101496]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 10 17:01:29 compute-0 sudo[101494]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:30 compute-0 ceph-mon[75249]: 5.4 scrub starts
Jan 10 17:01:30 compute-0 ceph-mon[75249]: 5.4 scrub ok
Jan 10 17:01:30 compute-0 ceph-mon[75249]: 5.13 scrub starts
Jan 10 17:01:30 compute-0 ceph-mon[75249]: 5.13 scrub ok
Jan 10 17:01:30 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 17:01:30 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 17:01:30 compute-0 sudo[101646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otvrmcuytlwgkiktdrzzulseynpcfwzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064490.013148-275-51457013527213/AnsiballZ_dnf.py'
Jan 10 17:01:30 compute-0 sudo[101646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:30 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 10 17:01:30 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 10 17:01:30 compute-0 python3.9[101648]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:01:31 compute-0 ceph-mon[75249]: pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:31 compute-0 ceph-mon[75249]: 4.11 scrub starts
Jan 10 17:01:31 compute-0 ceph-mon[75249]: 4.11 scrub ok
Jan 10 17:01:31 compute-0 ceph-mon[75249]: 3.1 scrub starts
Jan 10 17:01:31 compute-0 ceph-mon[75249]: 3.1 scrub ok
Jan 10 17:01:31 compute-0 sudo[101646]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:32 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 17:01:32 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 17:01:32 compute-0 sudo[101799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vturtcomwvadsnrwgmgcixufnfuwfxpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064492.020122-283-183850506636222/AnsiballZ_file.py'
Jan 10 17:01:32 compute-0 sudo[101799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:32 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 10 17:01:32 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 10 17:01:32 compute-0 python3.9[101801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:01:32 compute-0 sudo[101799]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:32 compute-0 sudo[101951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwdpoomhalfbsmzeowvefqnjwvdeohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064492.6591187-291-69224073666609/AnsiballZ_stat.py'
Jan 10 17:01:32 compute-0 sudo[101951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:33 compute-0 ceph-mon[75249]: pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:33 compute-0 ceph-mon[75249]: 4.18 scrub starts
Jan 10 17:01:33 compute-0 ceph-mon[75249]: 4.18 scrub ok
Jan 10 17:01:33 compute-0 ceph-mon[75249]: 3.c scrub starts
Jan 10 17:01:33 compute-0 ceph-mon[75249]: 3.c scrub ok
Jan 10 17:01:33 compute-0 python3.9[101953]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:01:33 compute-0 sudo[101951]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:33 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 17:01:33 compute-0 sudo[102029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upqqirajmzcpfddawzxjmwlitzhpyqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064492.6591187-291-69224073666609/AnsiballZ_file.py'
Jan 10 17:01:33 compute-0 sudo[102029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:33 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 17:01:33 compute-0 python3.9[102031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:01:33 compute-0 sudo[102029]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:34 compute-0 sudo[102181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhhjlwykkuuzpqclhousjkcyfntpikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064493.9250238-304-211065712168240/AnsiballZ_stat.py'
Jan 10 17:01:34 compute-0 sudo[102181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:34 compute-0 python3.9[102183]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:01:34 compute-0 sudo[102181]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:34 compute-0 sudo[102259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpwtgjhipcnmgqyvosipcpvybuspdacu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064493.9250238-304-211065712168240/AnsiballZ_file.py'
Jan 10 17:01:34 compute-0 sudo[102259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:34 compute-0 python3.9[102261]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:01:34 compute-0 sudo[102259]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:35 compute-0 ceph-mon[75249]: 2.15 scrub starts
Jan 10 17:01:35 compute-0 ceph-mon[75249]: 2.15 scrub ok
Jan 10 17:01:35 compute-0 ceph-mon[75249]: pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:35 compute-0 sudo[102411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtjycgkmbsjqoyiloihghbgpgjvhdrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064495.2264373-319-243139572471919/AnsiballZ_dnf.py'
Jan 10 17:01:35 compute-0 sudo[102411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:35 compute-0 python3.9[102413]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:01:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:37 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 17:01:37 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 17:01:37 compute-0 ceph-mon[75249]: pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:37 compute-0 sudo[102411]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:01:37
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes']
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:38 compute-0 python3.9[102564]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:01:38 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 17:01:38 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 17:01:38 compute-0 ceph-mon[75249]: 3.18 scrub starts
Jan 10 17:01:38 compute-0 ceph-mon[75249]: 3.18 scrub ok
Jan 10 17:01:38 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 17:01:38 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:01:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:38 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:01:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:01:39 compute-0 python3.9[102716]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 10 17:01:39 compute-0 ceph-mon[75249]: pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:39 compute-0 ceph-mon[75249]: 7.11 scrub starts
Jan 10 17:01:39 compute-0 ceph-mon[75249]: 7.11 scrub ok
Jan 10 17:01:39 compute-0 ceph-mon[75249]: 5.12 scrub starts
Jan 10 17:01:39 compute-0 ceph-mon[75249]: 5.12 scrub ok
Jan 10 17:01:39 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 10 17:01:39 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 10 17:01:39 compute-0 python3.9[102866]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:01:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:40 compute-0 sudo[102943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:01:40 compute-0 sudo[102943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:40 compute-0 sudo[102943]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:40 compute-0 sudo[102968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:01:40 compute-0 sudo[102968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: 7.4 scrub starts
Jan 10 17:01:40 compute-0 ceph-mon[75249]: 7.4 scrub ok
Jan 10 17:01:40 compute-0 sudo[103080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbtklgiibnsuomyjbikozbuybctzigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064500.0685954-360-157074875565453/AnsiballZ_systemd.py'
Jan 10 17:01:40 compute-0 sudo[103080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:40 compute-0 sudo[102968]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:01:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:01:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:01:40 compute-0 python3.9[103082]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:01:41 compute-0 sudo[103100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:01:41 compute-0 sudo[103100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:41 compute-0 sudo[103100]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:41 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 10 17:01:41 compute-0 sudo[103128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:01:41 compute-0 sudo[103128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:41 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 10 17:01:41 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 10 17:01:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 17:01:41 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 17:01:41 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.409182561 +0000 UTC m=+0.082028046 container create 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 17:01:41 compute-0 ceph-mon[75249]: pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:01:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.351388763 +0000 UTC m=+0.024234268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:41 compute-0 systemd[1]: Started libpod-conmon-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope.
Jan 10 17:01:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.510334118 +0000 UTC m=+0.183179673 container init 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.519292772 +0000 UTC m=+0.192138267 container start 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.522997857 +0000 UTC m=+0.195843362 container attach 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 17:01:41 compute-0 jolly_curie[103186]: 167 167
Jan 10 17:01:41 compute-0 systemd[1]: libpod-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope: Deactivated successfully.
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.528343939 +0000 UTC m=+0.201189424 container died 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aab4e681fe84b6b9e5adf2f14bd1a610da4f4146e4ad27371316dee6511c0e5-merged.mount: Deactivated successfully.
Jan 10 17:01:41 compute-0 podman[103168]: 2026-01-10 17:01:41.569629759 +0000 UTC m=+0.242475244 container remove 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 17:01:41 compute-0 systemd[1]: libpod-conmon-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope: Deactivated successfully.
Jan 10 17:01:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 17:01:41 compute-0 sudo[103080]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:41 compute-0 podman[103220]: 2026-01-10 17:01:41.739500074 +0000 UTC m=+0.051757148 container create b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:01:41 compute-0 systemd[1]: Started libpod-conmon-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope.
Jan 10 17:01:41 compute-0 podman[103220]: 2026-01-10 17:01:41.714281189 +0000 UTC m=+0.026538233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:41 compute-0 podman[103220]: 2026-01-10 17:01:41.829863746 +0000 UTC m=+0.142120810 container init b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:01:41 compute-0 podman[103220]: 2026-01-10 17:01:41.839581021 +0000 UTC m=+0.151838055 container start b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:01:41 compute-0 podman[103220]: 2026-01-10 17:01:41.843184083 +0000 UTC m=+0.155441147 container attach b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:01:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:42 compute-0 python3.9[103382]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 10 17:01:42 compute-0 fervent_tu[103254]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:01:42 compute-0 fervent_tu[103254]: --> All data devices are unavailable
Jan 10 17:01:42 compute-0 ceph-mon[75249]: 5.16 scrub starts
Jan 10 17:01:42 compute-0 ceph-mon[75249]: 5.16 scrub ok
Jan 10 17:01:42 compute-0 ceph-mon[75249]: pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:42 compute-0 systemd[1]: libpod-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope: Deactivated successfully.
Jan 10 17:01:42 compute-0 podman[103220]: 2026-01-10 17:01:42.445919321 +0000 UTC m=+0.758176375 container died b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12-merged.mount: Deactivated successfully.
Jan 10 17:01:42 compute-0 podman[103220]: 2026-01-10 17:01:42.493962496 +0000 UTC m=+0.806219530 container remove b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:01:42 compute-0 systemd[1]: libpod-conmon-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope: Deactivated successfully.
Jan 10 17:01:42 compute-0 sudo[103128]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:42 compute-0 sudo[103432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:01:42 compute-0 sudo[103432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:42 compute-0 sudo[103432]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:42 compute-0 sudo[103457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:01:42 compute-0 sudo[103457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.068824186 +0000 UTC m=+0.055700933 container create f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:01:43 compute-0 systemd[1]: Started libpod-conmon-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope.
Jan 10 17:01:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.04360158 +0000 UTC m=+0.030478457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.150631981 +0000 UTC m=+0.137508828 container init f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.159814091 +0000 UTC m=+0.146690888 container start f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:01:43 compute-0 naughty_albattani[103510]: 167 167
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.164016651 +0000 UTC m=+0.150893438 container attach f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:01:43 compute-0 systemd[1]: libpod-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope: Deactivated successfully.
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.165547984 +0000 UTC m=+0.152424801 container died f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:01:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6df8e85110fa5979245f72674140df0755cbd693c3ccc50dce372240baa555f5-merged.mount: Deactivated successfully.
Jan 10 17:01:43 compute-0 podman[103494]: 2026-01-10 17:01:43.218398316 +0000 UTC m=+0.205275113 container remove f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:01:43 compute-0 systemd[1]: libpod-conmon-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope: Deactivated successfully.
Jan 10 17:01:43 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 17:01:43 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.394856318 +0000 UTC m=+0.054235021 container create 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:01:43 compute-0 systemd[1]: Started libpod-conmon-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope.
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.371739632 +0000 UTC m=+0.031118315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.487958373 +0000 UTC m=+0.147337096 container init 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:01:43 compute-0 ceph-mon[75249]: 5.9 scrub starts
Jan 10 17:01:43 compute-0 ceph-mon[75249]: 5.9 scrub ok
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.497441663 +0000 UTC m=+0.156820336 container start 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.503074833 +0000 UTC m=+0.162453546 container attach 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:01:43 compute-0 kind_davinci[103550]: {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     "0": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "devices": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "/dev/loop3"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             ],
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_name": "ceph_lv0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_size": "21470642176",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "name": "ceph_lv0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "tags": {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_name": "ceph",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.crush_device_class": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.encrypted": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.objectstore": "bluestore",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_id": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.vdo": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.with_tpm": "0"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             },
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "vg_name": "ceph_vg0"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         }
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     ],
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     "1": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "devices": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "/dev/loop4"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             ],
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_name": "ceph_lv1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_size": "21470642176",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "name": "ceph_lv1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "tags": {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_name": "ceph",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.crush_device_class": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.encrypted": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.objectstore": "bluestore",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_id": "1",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.vdo": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.with_tpm": "0"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             },
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "vg_name": "ceph_vg1"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         }
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     ],
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     "2": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "devices": [
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "/dev/loop5"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             ],
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_name": "ceph_lv2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_size": "21470642176",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "name": "ceph_lv2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "tags": {
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.cluster_name": "ceph",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.crush_device_class": "",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.encrypted": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.objectstore": "bluestore",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osd_id": "2",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.vdo": "0",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:                 "ceph.with_tpm": "0"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             },
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "type": "block",
Jan 10 17:01:43 compute-0 kind_davinci[103550]:             "vg_name": "ceph_vg2"
Jan 10 17:01:43 compute-0 kind_davinci[103550]:         }
Jan 10 17:01:43 compute-0 kind_davinci[103550]:     ]
Jan 10 17:01:43 compute-0 kind_davinci[103550]: }
Jan 10 17:01:43 compute-0 systemd[1]: libpod-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope: Deactivated successfully.
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.851356667 +0000 UTC m=+0.510735400 container died 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:01:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686-merged.mount: Deactivated successfully.
Jan 10 17:01:43 compute-0 podman[103534]: 2026-01-10 17:01:43.914097759 +0000 UTC m=+0.573476412 container remove 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 17:01:43 compute-0 systemd[1]: libpod-conmon-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope: Deactivated successfully.
Jan 10 17:01:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:44 compute-0 sudo[103457]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:44 compute-0 sudo[103657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:01:44 compute-0 sudo[103657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:44 compute-0 sudo[103657]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:44 compute-0 sudo[103726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pschkzgxxscxuhggzhvxiupvqbmsbgbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064503.72658-417-136153193852089/AnsiballZ_systemd.py'
Jan 10 17:01:44 compute-0 sudo[103726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:44 compute-0 sudo[103718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:01:44 compute-0 sudo[103718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:01:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:01:44 compute-0 python3.9[103743]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.432799215 +0000 UTC m=+0.052570485 container create c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:01:44 compute-0 systemd[1]: Started libpod-conmon-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope.
Jan 10 17:01:44 compute-0 sudo[103726]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:44 compute-0 ceph-mon[75249]: pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.407326551 +0000 UTC m=+0.027097871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.526063164 +0000 UTC m=+0.145834584 container init c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.5343808 +0000 UTC m=+0.154152070 container start c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.538313682 +0000 UTC m=+0.158084972 container attach c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:01:44 compute-0 clever_meitner[103778]: 167 167
Jan 10 17:01:44 compute-0 systemd[1]: libpod-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope: Deactivated successfully.
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.540603717 +0000 UTC m=+0.160374997 container died c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:01:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0454058dbbea5b21cad6d42b4905541eaf410ba9e246a1bc5355a8bcb87244f-merged.mount: Deactivated successfully.
Jan 10 17:01:44 compute-0 podman[103761]: 2026-01-10 17:01:44.578386971 +0000 UTC m=+0.198158241 container remove c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:01:44 compute-0 systemd[1]: libpod-conmon-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope: Deactivated successfully.
Jan 10 17:01:44 compute-0 podman[103877]: 2026-01-10 17:01:44.753735822 +0000 UTC m=+0.051949177 container create 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:01:44 compute-0 podman[103877]: 2026-01-10 17:01:44.732078487 +0000 UTC m=+0.030291852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:01:44 compute-0 systemd[1]: Started libpod-conmon-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope.
Jan 10 17:01:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:01:44 compute-0 sudo[103969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbtqqwyyvcordesghrwlnhtsozvuies ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064504.6203766-417-21851591073369/AnsiballZ_systemd.py'
Jan 10 17:01:44 compute-0 sudo[103969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:45 compute-0 podman[103877]: 2026-01-10 17:01:45.222879529 +0000 UTC m=+0.521092914 container init 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:01:45 compute-0 podman[103877]: 2026-01-10 17:01:45.237258578 +0000 UTC m=+0.535471933 container start 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 17:01:45 compute-0 python3.9[103971]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:01:45 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 10 17:01:45 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 10 17:01:45 compute-0 sudo[103969]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:45 compute-0 podman[103877]: 2026-01-10 17:01:45.460572672 +0000 UTC m=+0.758786067 container attach 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:01:45 compute-0 sshd-session[96757]: Connection closed by 192.168.122.30 port 40626
Jan 10 17:01:45 compute-0 sshd-session[96754]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:01:45 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 10 17:01:45 compute-0 systemd[1]: session-35.scope: Consumed 1min 10.570s CPU time.
Jan 10 17:01:45 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Jan 10 17:01:45 compute-0 systemd-logind[798]: Removed session 35.
Jan 10 17:01:45 compute-0 ceph-mon[75249]: 3.f scrub starts
Jan 10 17:01:45 compute-0 ceph-mon[75249]: 3.f scrub ok
Jan 10 17:01:45 compute-0 lvm[104074]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:01:45 compute-0 lvm[104073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:01:45 compute-0 lvm[104073]: VG ceph_vg0 finished
Jan 10 17:01:45 compute-0 lvm[104074]: VG ceph_vg1 finished
Jan 10 17:01:45 compute-0 lvm[104076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:01:45 compute-0 lvm[104076]: VG ceph_vg2 finished
Jan 10 17:01:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:46 compute-0 musing_hopper[103947]: {}
Jan 10 17:01:46 compute-0 systemd[1]: libpod-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Deactivated successfully.
Jan 10 17:01:46 compute-0 systemd[1]: libpod-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Consumed 1.397s CPU time.
Jan 10 17:01:46 compute-0 podman[103877]: 2026-01-10 17:01:46.119984624 +0000 UTC m=+1.418197989 container died 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:01:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3-merged.mount: Deactivated successfully.
Jan 10 17:01:46 compute-0 podman[103877]: 2026-01-10 17:01:46.291939639 +0000 UTC m=+1.590153004 container remove 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:01:46 compute-0 systemd[1]: libpod-conmon-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Deactivated successfully.
Jan 10 17:01:46 compute-0 sudo[103718]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:46 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 17:01:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:01:46 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 17:01:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:01:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:46 compute-0 sudo[104092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:01:46 compute-0 sudo[104092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:01:46 compute-0 sudo[104092]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:47 compute-0 ceph-mon[75249]: pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:47 compute-0 ceph-mon[75249]: 3.16 scrub starts
Jan 10 17:01:47 compute-0 ceph-mon[75249]: 3.16 scrub ok
Jan 10 17:01:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:01:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:49 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 17:01:49 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 17:01:49 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 10 17:01:49 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 10 17:01:49 compute-0 ceph-mon[75249]: pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:50 compute-0 ceph-mon[75249]: 7.15 scrub starts
Jan 10 17:01:50 compute-0 ceph-mon[75249]: 7.15 scrub ok
Jan 10 17:01:50 compute-0 ceph-mon[75249]: 7.1f scrub starts
Jan 10 17:01:50 compute-0 ceph-mon[75249]: 7.1f scrub ok
Jan 10 17:01:51 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 17:01:51 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 17:01:51 compute-0 sshd-session[104117]: Accepted publickey for zuul from 192.168.122.30 port 52004 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:01:51 compute-0 systemd-logind[798]: New session 36 of user zuul.
Jan 10 17:01:51 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 10 17:01:51 compute-0 ceph-mon[75249]: pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:51 compute-0 sshd-session[104117]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:01:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:52 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 17:01:52 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 17:01:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 10 17:01:52 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 10 17:01:52 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 17:01:52 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 17:01:52 compute-0 ceph-mon[75249]: 4.12 scrub starts
Jan 10 17:01:52 compute-0 ceph-mon[75249]: 4.12 scrub ok
Jan 10 17:01:52 compute-0 ceph-mon[75249]: pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:52 compute-0 ceph-mon[75249]: 2.d scrub starts
Jan 10 17:01:52 compute-0 ceph-mon[75249]: 2.d scrub ok
Jan 10 17:01:52 compute-0 python3.9[104270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:01:53 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 10 17:01:53 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 10 17:01:53 compute-0 ceph-mon[75249]: 5.7 scrub starts
Jan 10 17:01:53 compute-0 ceph-mon[75249]: 5.7 scrub ok
Jan 10 17:01:53 compute-0 ceph-mon[75249]: 7.1c scrub starts
Jan 10 17:01:53 compute-0 ceph-mon[75249]: 7.1c scrub ok
Jan 10 17:01:53 compute-0 sudo[104424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzenbnwtnlaicaihajecqoexbaqhmozb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064513.0562391-31-125609569135916/AnsiballZ_getent.py'
Jan 10 17:01:53 compute-0 sudo[104424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:53 compute-0 python3.9[104426]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 10 17:01:53 compute-0 sudo[104424]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:54 compute-0 sudo[104577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdbgjfhpywsooyumfemqsdzopqtnoeyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064514.136013-43-77586394267730/AnsiballZ_setup.py'
Jan 10 17:01:54 compute-0 sudo[104577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:54 compute-0 ceph-mon[75249]: 3.1b scrub starts
Jan 10 17:01:54 compute-0 ceph-mon[75249]: 3.1b scrub ok
Jan 10 17:01:54 compute-0 ceph-mon[75249]: pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:54 compute-0 python3.9[104579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:01:55 compute-0 sudo[104577]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:55 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 17:01:55 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 17:01:55 compute-0 sudo[104661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkizildpnratatagpgyklzwpegrtsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064514.136013-43-77586394267730/AnsiballZ_dnf.py'
Jan 10 17:01:55 compute-0 sudo[104661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:55 compute-0 python3.9[104663]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 17:01:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 10 17:01:56 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 10 17:01:56 compute-0 ceph-mon[75249]: 3.11 scrub starts
Jan 10 17:01:56 compute-0 ceph-mon[75249]: 3.11 scrub ok
Jan 10 17:01:57 compute-0 sudo[104661]: pam_unix(sudo:session): session closed for user root
Jan 10 17:01:57 compute-0 ceph-mon[75249]: pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:57 compute-0 ceph-mon[75249]: 2.19 scrub starts
Jan 10 17:01:57 compute-0 ceph-mon[75249]: 2.19 scrub ok
Jan 10 17:01:57 compute-0 sudo[104814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwxosgalbwzhgpyvqdqmxsaqeqcbtic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064517.5547104-57-40765203858404/AnsiballZ_dnf.py'
Jan 10 17:01:57 compute-0 sudo[104814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:01:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:58 compute-0 python3.9[104816]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:01:58 compute-0 ceph-mon[75249]: pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:01:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:01:59 compute-0 sudo[104814]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:00 compute-0 sudo[104967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnrctdetffcwzkwcqrozipcfxyruxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064519.5971751-65-127041539450282/AnsiballZ_systemd.py'
Jan 10 17:02:00 compute-0 sudo[104967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 17:02:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 17:02:00 compute-0 python3.9[104969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:02:00 compute-0 sudo[104967]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:01 compute-0 ceph-mon[75249]: pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:01 compute-0 ceph-mon[75249]: 2.3 scrub starts
Jan 10 17:02:01 compute-0 ceph-mon[75249]: 2.3 scrub ok
Jan 10 17:02:01 compute-0 python3.9[105122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:02 compute-0 sudo[105272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbvdejsmosajlegjyjrepsfacbnbgdob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064521.6102965-83-249041135016251/AnsiballZ_sefcontext.py'
Jan 10 17:02:02 compute-0 sudo[105272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:02 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 17:02:02 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 17:02:02 compute-0 python3.9[105274]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 10 17:02:02 compute-0 sudo[105272]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:03 compute-0 ceph-mon[75249]: pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:03 compute-0 ceph-mon[75249]: 2.4 scrub starts
Jan 10 17:02:03 compute-0 ceph-mon[75249]: 2.4 scrub ok
Jan 10 17:02:03 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 10 17:02:03 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 10 17:02:03 compute-0 python3.9[105424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:04 compute-0 sudo[105580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbtflnkzabvwckolbelxkgwsiptcvkqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064523.7466726-101-135882594912681/AnsiballZ_dnf.py'
Jan 10 17:02:04 compute-0 sudo[105580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:04 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 10 17:02:04 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 10 17:02:04 compute-0 ceph-mon[75249]: 2.18 scrub starts
Jan 10 17:02:04 compute-0 ceph-mon[75249]: 2.18 scrub ok
Jan 10 17:02:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 17:02:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 17:02:04 compute-0 python3.9[105582]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:04 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 17:02:04 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 17:02:05 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 17:02:05 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 17:02:05 compute-0 ceph-mon[75249]: pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 7.9 scrub starts
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 7.9 scrub ok
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 2.5 scrub starts
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 2.5 scrub ok
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 7.5 scrub starts
Jan 10 17:02:05 compute-0 ceph-mon[75249]: 7.5 scrub ok
Jan 10 17:02:05 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 17:02:05 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 17:02:05 compute-0 sudo[105580]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:06 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 17:02:06 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 17:02:06 compute-0 ceph-mon[75249]: 2.7 scrub starts
Jan 10 17:02:06 compute-0 ceph-mon[75249]: 2.7 scrub ok
Jan 10 17:02:06 compute-0 ceph-mon[75249]: 3.e scrub starts
Jan 10 17:02:06 compute-0 ceph-mon[75249]: 3.e scrub ok
Jan 10 17:02:06 compute-0 sudo[105733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgkqmabhzeexzdiaahypemmmmjntihzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064525.8427079-109-206087948154446/AnsiballZ_command.py'
Jan 10 17:02:06 compute-0 sudo[105733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:06 compute-0 python3.9[105735]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:02:07 compute-0 sudo[105733]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:07 compute-0 ceph-mon[75249]: pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:07 compute-0 ceph-mon[75249]: 5.11 scrub starts
Jan 10 17:02:07 compute-0 ceph-mon[75249]: 5.11 scrub ok
Jan 10 17:02:07 compute-0 sudo[106020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rriwlnotfuxaignjpoykgtijflzaiunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064527.3396301-117-182070579926793/AnsiballZ_file.py'
Jan 10 17:02:07 compute-0 sudo[106020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:07 compute-0 python3.9[106022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 10 17:02:07 compute-0 sudo[106020]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:08 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 10 17:02:08 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 10 17:02:08 compute-0 python3.9[106172]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:09 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 17:02:09 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 17:02:09 compute-0 ceph-mon[75249]: pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:09 compute-0 ceph-mon[75249]: 5.1e scrub starts
Jan 10 17:02:09 compute-0 ceph-mon[75249]: 5.1e scrub ok
Jan 10 17:02:09 compute-0 sudo[106324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckgwfwnjolbwhnsmbrxavopkevjfdej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064529.0613904-133-164679942934257/AnsiballZ_dnf.py'
Jan 10 17:02:09 compute-0 sudo[106324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:09 compute-0 python3.9[106326]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:10 compute-0 ceph-mon[75249]: 5.1 scrub starts
Jan 10 17:02:10 compute-0 ceph-mon[75249]: 5.1 scrub ok
Jan 10 17:02:10 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 10 17:02:10 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 10 17:02:10 compute-0 sudo[106324]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:11 compute-0 ceph-mon[75249]: pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:11 compute-0 ceph-mon[75249]: 6.0 scrub starts
Jan 10 17:02:11 compute-0 ceph-mon[75249]: 6.0 scrub ok
Jan 10 17:02:11 compute-0 sudo[106477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omzskzfterkaeopytamyspbhbbxdrwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064531.079416-142-77717275129887/AnsiballZ_dnf.py'
Jan 10 17:02:11 compute-0 sudo[106477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:11 compute-0 python3.9[106479]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:12 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 17:02:12 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 17:02:12 compute-0 sudo[106477]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:13 compute-0 ceph-mon[75249]: pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:13 compute-0 ceph-mon[75249]: 7.2 scrub starts
Jan 10 17:02:13 compute-0 ceph-mon[75249]: 7.2 scrub ok
Jan 10 17:02:13 compute-0 sudo[106630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiijyjydgkmvcmhepeznzggejjtklmof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064533.2322607-154-57879528188149/AnsiballZ_stat.py'
Jan 10 17:02:13 compute-0 sudo[106630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:13 compute-0 python3.9[106632]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:02:13 compute-0 sudo[106630]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 17:02:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 17:02:14 compute-0 sudo[106784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moshopnoeyogxgzgggsxfdtakbvzwzcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064533.8927865-162-13968033403527/AnsiballZ_slurp.py'
Jan 10 17:02:14 compute-0 sudo[106784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:14 compute-0 python3.9[106786]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 10 17:02:14 compute-0 sudo[106784]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:15 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 10 17:02:15 compute-0 ceph-mon[75249]: pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:15 compute-0 ceph-mon[75249]: 7.a scrub starts
Jan 10 17:02:15 compute-0 ceph-mon[75249]: 7.a scrub ok
Jan 10 17:02:15 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 10 17:02:15 compute-0 sshd-session[104120]: Connection closed by 192.168.122.30 port 52004
Jan 10 17:02:15 compute-0 sshd-session[104117]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:02:15 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 10 17:02:15 compute-0 systemd[1]: session-36.scope: Consumed 18.810s CPU time.
Jan 10 17:02:15 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Jan 10 17:02:15 compute-0 systemd-logind[798]: Removed session 36.
Jan 10 17:02:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:16 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 17:02:16 compute-0 ceph-mon[75249]: 6.3 scrub starts
Jan 10 17:02:16 compute-0 ceph-mon[75249]: 6.3 scrub ok
Jan 10 17:02:16 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 17:02:17 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 17:02:17 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 17:02:17 compute-0 ceph-mon[75249]: pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:17 compute-0 ceph-mon[75249]: 3.7 scrub starts
Jan 10 17:02:17 compute-0 ceph-mon[75249]: 3.7 scrub ok
Jan 10 17:02:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:18 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 17:02:18 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 17:02:18 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 17:02:18 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 17:02:18 compute-0 ceph-mon[75249]: 3.5 scrub starts
Jan 10 17:02:18 compute-0 ceph-mon[75249]: 3.5 scrub ok
Jan 10 17:02:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 17:02:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 17:02:19 compute-0 ceph-mon[75249]: pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:19 compute-0 ceph-mon[75249]: 2.6 scrub starts
Jan 10 17:02:19 compute-0 ceph-mon[75249]: 2.6 scrub ok
Jan 10 17:02:19 compute-0 ceph-mon[75249]: 7.c scrub starts
Jan 10 17:02:19 compute-0 ceph-mon[75249]: 7.c scrub ok
Jan 10 17:02:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:20 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 17:02:20 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 17:02:20 compute-0 ceph-mon[75249]: 5.1d scrub starts
Jan 10 17:02:20 compute-0 ceph-mon[75249]: 5.1d scrub ok
Jan 10 17:02:20 compute-0 sshd-session[106811]: Accepted publickey for zuul from 192.168.122.30 port 40240 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:02:20 compute-0 systemd-logind[798]: New session 37 of user zuul.
Jan 10 17:02:20 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 10 17:02:20 compute-0 sshd-session[106811]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:02:21 compute-0 ceph-mon[75249]: pgmap v200: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:21 compute-0 ceph-mon[75249]: 5.f scrub starts
Jan 10 17:02:21 compute-0 ceph-mon[75249]: 5.f scrub ok
Jan 10 17:02:21 compute-0 python3.9[106964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:22 compute-0 python3.9[107118]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:02:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 10 17:02:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 10 17:02:23 compute-0 ceph-mon[75249]: pgmap v201: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:23 compute-0 python3.9[107311]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:02:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:24 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 17:02:24 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 17:02:24 compute-0 ceph-mon[75249]: 6.7 scrub starts
Jan 10 17:02:24 compute-0 ceph-mon[75249]: 6.7 scrub ok
Jan 10 17:02:24 compute-0 sshd-session[106814]: Connection closed by 192.168.122.30 port 40240
Jan 10 17:02:24 compute-0 sshd-session[106811]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:02:24 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 10 17:02:24 compute-0 systemd[1]: session-37.scope: Consumed 2.603s CPU time.
Jan 10 17:02:24 compute-0 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Jan 10 17:02:24 compute-0 systemd-logind[798]: Removed session 37.
Jan 10 17:02:25 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 17:02:25 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 17:02:25 compute-0 ceph-mon[75249]: pgmap v202: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:25 compute-0 ceph-mon[75249]: 7.8 scrub starts
Jan 10 17:02:25 compute-0 ceph-mon[75249]: 7.8 scrub ok
Jan 10 17:02:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:26 compute-0 ceph-mon[75249]: 5.c scrub starts
Jan 10 17:02:26 compute-0 ceph-mon[75249]: 5.c scrub ok
Jan 10 17:02:26 compute-0 ceph-mon[75249]: pgmap v203: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:28 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 10 17:02:28 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 10 17:02:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:29 compute-0 ceph-mon[75249]: pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:29 compute-0 ceph-mon[75249]: 6.9 scrub starts
Jan 10 17:02:29 compute-0 ceph-mon[75249]: 6.9 scrub ok
Jan 10 17:02:29 compute-0 sshd-session[107337]: Accepted publickey for zuul from 192.168.122.30 port 40242 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:02:29 compute-0 systemd-logind[798]: New session 38 of user zuul.
Jan 10 17:02:29 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 10 17:02:29 compute-0 sshd-session[107337]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:02:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:30 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 17:02:30 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 17:02:30 compute-0 python3.9[107490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:31 compute-0 ceph-mon[75249]: pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:31 compute-0 ceph-mon[75249]: 3.1d scrub starts
Jan 10 17:02:31 compute-0 ceph-mon[75249]: 3.1d scrub ok
Jan 10 17:02:31 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 17:02:31 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 17:02:31 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 10 17:02:31 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 10 17:02:31 compute-0 python3.9[107644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:32 compute-0 ceph-mon[75249]: 4.5 scrub starts
Jan 10 17:02:32 compute-0 ceph-mon[75249]: 4.5 scrub ok
Jan 10 17:02:32 compute-0 ceph-mon[75249]: 6.5 scrub starts
Jan 10 17:02:32 compute-0 ceph-mon[75249]: 6.5 scrub ok
Jan 10 17:02:32 compute-0 sudo[107798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wotnvtkmgpdbhwbfvyyvhrhcgqyixsgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064552.037987-35-150531666267085/AnsiballZ_setup.py'
Jan 10 17:02:32 compute-0 sudo[107798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:32 compute-0 python3.9[107800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:02:32 compute-0 sudo[107798]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:33 compute-0 ceph-mon[75249]: pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:33 compute-0 sudo[107882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vldhiadmpbyhabxixencygqwdxylisur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064552.037987-35-150531666267085/AnsiballZ_dnf.py'
Jan 10 17:02:33 compute-0 sudo[107882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:33 compute-0 python3.9[107884]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:34 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 17:02:34 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 17:02:34 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 10 17:02:34 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 10 17:02:34 compute-0 ceph-mon[75249]: pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:35 compute-0 sudo[107882]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:35 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 17:02:35 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 7.e scrub starts
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 7.e scrub ok
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 6.a scrub starts
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 6.a scrub ok
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 5.1a scrub starts
Jan 10 17:02:35 compute-0 ceph-mon[75249]: 5.1a scrub ok
Jan 10 17:02:35 compute-0 sshd-session[107886]: Connection closed by authenticating user root 216.36.124.133 port 42440 [preauth]
Jan 10 17:02:35 compute-0 sudo[108037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixujtblolqasbgtnmcoyjonbfvnlhlac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064555.3073142-47-96478569838953/AnsiballZ_setup.py'
Jan 10 17:02:35 compute-0 sudo[108037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:35 compute-0 python3.9[108039]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:02:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:36 compute-0 sudo[108037]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:36 compute-0 ceph-mon[75249]: pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:36 compute-0 sudo[108232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhosumnnexrdcwcilbezskvaxfznxdqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064556.4060004-58-133389230768550/AnsiballZ_file.py'
Jan 10 17:02:36 compute-0 sudo[108232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:37 compute-0 python3.9[108234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:02:37 compute-0 sudo[108232]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:37 compute-0 sudo[108384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hafmyhcgurjokbemdlpaglasbcinwulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064557.2878122-66-182530519278814/AnsiballZ_command.py'
Jan 10 17:02:37 compute-0 sudo[108384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:37 compute-0 python3.9[108386]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:02:38
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images']
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:02:38 compute-0 sudo[108384]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:38 compute-0 sudo[108549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srsvsumrybvaujvwmtoznrspipnpwtil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064558.192501-74-161962954383476/AnsiballZ_stat.py'
Jan 10 17:02:38 compute-0 sudo[108549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:38 compute-0 python3.9[108551]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:02:38 compute-0 sudo[108549]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:02:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:02:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:02:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:02:39 compute-0 sudo[108627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-necwylwwfbdgbnplcrrvlscwuxzhvjls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064558.192501-74-161962954383476/AnsiballZ_file.py'
Jan 10 17:02:39 compute-0 ceph-mon[75249]: pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:39 compute-0 sudo[108627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:39 compute-0 python3.9[108629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:02:39 compute-0 sudo[108627]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:39 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 17:02:39 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 17:02:39 compute-0 sudo[108779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbpbauoufbaijhqxwsftjmmhftevmaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064559.5321512-86-89565137918676/AnsiballZ_stat.py'
Jan 10 17:02:39 compute-0 sudo[108779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:40 compute-0 python3.9[108781]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:02:40 compute-0 sudo[108779]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:40 compute-0 ceph-mon[75249]: 5.18 scrub starts
Jan 10 17:02:40 compute-0 ceph-mon[75249]: 5.18 scrub ok
Jan 10 17:02:40 compute-0 sudo[108857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shgcsrfstehiyzhzjyaqcpioohrexvap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064559.5321512-86-89565137918676/AnsiballZ_file.py'
Jan 10 17:02:40 compute-0 sudo[108857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:40 compute-0 python3.9[108859]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:02:40 compute-0 sudo[108857]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:41 compute-0 ceph-mon[75249]: pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:41 compute-0 sudo[109009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzbcyxfgiogczziykrltpwuzehmjqorx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064560.8033087-99-155639724656295/AnsiballZ_ini_file.py'
Jan 10 17:02:41 compute-0 sudo[109009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:41 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 17:02:41 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 17:02:41 compute-0 python3.9[109011]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:02:41 compute-0 sudo[109009]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:42 compute-0 sudo[109161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvzkljirnfvnztkztbvvfbrmetetonhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064561.6173558-99-54149316677756/AnsiballZ_ini_file.py'
Jan 10 17:02:42 compute-0 sudo[109161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:42 compute-0 ceph-mon[75249]: 7.1 scrub starts
Jan 10 17:02:42 compute-0 ceph-mon[75249]: 7.1 scrub ok
Jan 10 17:02:42 compute-0 python3.9[109163]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:02:42 compute-0 sudo[109161]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:42 compute-0 sudo[109313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshweqqzunmjzplwmpryydpqvospnmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064562.3691185-99-256279850448688/AnsiballZ_ini_file.py'
Jan 10 17:02:42 compute-0 sudo[109313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:42 compute-0 python3.9[109315]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:02:42 compute-0 sudo[109313]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:43 compute-0 ceph-mon[75249]: pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:43 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 17:02:43 compute-0 sudo[109465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpsntzfuewtfwvedydsvcvujsperxwvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064563.034143-99-99743026855589/AnsiballZ_ini_file.py'
Jan 10 17:02:43 compute-0 sudo[109465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:43 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 17:02:43 compute-0 python3.9[109467]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:02:43 compute-0 sudo[109465]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:44 compute-0 ceph-mon[75249]: 7.1a scrub starts
Jan 10 17:02:44 compute-0 ceph-mon[75249]: 7.1a scrub ok
Jan 10 17:02:44 compute-0 sudo[109617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzshambnkkhfrgywfhqvymmhyjejtnkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064563.8855429-130-110274729332714/AnsiballZ_dnf.py'
Jan 10 17:02:44 compute-0 sudo[109617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:02:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:02:44 compute-0 python3.9[109619]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:45 compute-0 ceph-mon[75249]: pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:45 compute-0 sudo[109617]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:46 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 17:02:46 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 17:02:46 compute-0 sudo[109770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yheyeixpflqfcmusayfampmkgaxoblzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064566.1589801-141-119624596595040/AnsiballZ_setup.py'
Jan 10 17:02:46 compute-0 sudo[109770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:46 compute-0 sudo[109773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:02:46 compute-0 sudo[109773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:46 compute-0 sudo[109773]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:46 compute-0 sudo[109798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:02:46 compute-0 sudo[109798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:46 compute-0 python3.9[109772]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:02:46 compute-0 sudo[109770]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:47 compute-0 sudo[109798]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:47 compute-0 ceph-mon[75249]: pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:47 compute-0 ceph-mon[75249]: 5.19 scrub starts
Jan 10 17:02:47 compute-0 ceph-mon[75249]: 5.19 scrub ok
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:02:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:02:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:02:47 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 17:02:47 compute-0 sudo[110025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvrkprwswojvmezkvtveixqiibkpanm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064566.9819188-149-273609075389324/AnsiballZ_stat.py'
Jan 10 17:02:47 compute-0 sudo[110025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:47 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 17:02:47 compute-0 sudo[109991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:02:47 compute-0 sudo[109991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:47 compute-0 sudo[109991]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:47 compute-0 sudo[110034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:02:47 compute-0 sudo[110034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:47 compute-0 python3.9[110032]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:02:47 compute-0 sudo[110025]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.651729469 +0000 UTC m=+0.043517759 container create fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:02:47 compute-0 systemd[1]: Started libpod-conmon-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope.
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.635402023 +0000 UTC m=+0.027190343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.751978105 +0000 UTC m=+0.143766435 container init fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.760579339 +0000 UTC m=+0.152367639 container start fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.764095665 +0000 UTC m=+0.155883975 container attach fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:02:47 compute-0 hardcore_euler[110116]: 167 167
Jan 10 17:02:47 compute-0 systemd[1]: libpod-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope: Deactivated successfully.
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.767104937 +0000 UTC m=+0.158893237 container died fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a7e4d49bef48ceb62d64f8bce932dd4101843c16ff0d4430a22446cbf98432-merged.mount: Deactivated successfully.
Jan 10 17:02:47 compute-0 podman[110095]: 2026-01-10 17:02:47.808279321 +0000 UTC m=+0.200067611 container remove fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:02:47 compute-0 systemd[1]: libpod-conmon-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope: Deactivated successfully.
Jan 10 17:02:47 compute-0 podman[110235]: 2026-01-10 17:02:47.967966579 +0000 UTC m=+0.044910926 container create 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:02:47 compute-0 sudo[110275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnadttlbydfywnsahrfzfppzzkvargju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064567.7394967-158-208435400287392/AnsiballZ_stat.py'
Jan 10 17:02:47 compute-0 sudo[110275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:48 compute-0 systemd[1]: Started libpod-conmon-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope.
Jan 10 17:02:48 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:48.041249979 +0000 UTC m=+0.118194376 container init 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:47.950353839 +0000 UTC m=+0.027298216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:48.052810265 +0000 UTC m=+0.129754622 container start 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:48.058724916 +0000 UTC m=+0.135669303 container attach 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:02:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:48 compute-0 python3.9[110279]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:02:48 compute-0 sudo[110275]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:02:48 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:02:48 compute-0 ceph-mon[75249]: 3.8 scrub starts
Jan 10 17:02:48 compute-0 ceph-mon[75249]: 3.8 scrub ok
Jan 10 17:02:48 compute-0 wizardly_ellis[110280]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:02:48 compute-0 wizardly_ellis[110280]: --> All data devices are unavailable
Jan 10 17:02:48 compute-0 systemd[1]: libpod-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope: Deactivated successfully.
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:48.617921327 +0000 UTC m=+0.694865684 container died 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 17:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497-merged.mount: Deactivated successfully.
Jan 10 17:02:48 compute-0 systemd[76625]: Created slice User Background Tasks Slice.
Jan 10 17:02:48 compute-0 podman[110235]: 2026-01-10 17:02:48.677502273 +0000 UTC m=+0.754446620 container remove 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:02:48 compute-0 systemd[76625]: Starting Cleanup of User's Temporary Files and Directories...
Jan 10 17:02:48 compute-0 systemd[1]: libpod-conmon-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope: Deactivated successfully.
Jan 10 17:02:48 compute-0 systemd[76625]: Finished Cleanup of User's Temporary Files and Directories.
Jan 10 17:02:48 compute-0 sudo[110034]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:48 compute-0 sudo[110484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtcawszfugdswodlvsyvazzhbzhuzqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064568.4935782-168-119298122079783/AnsiballZ_command.py'
Jan 10 17:02:48 compute-0 sudo[110484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:48 compute-0 sudo[110446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:02:48 compute-0 sudo[110446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:48 compute-0 sudo[110446]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:48 compute-0 sudo[110492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:02:48 compute-0 sudo[110492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:48 compute-0 python3.9[110490]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:02:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:49 compute-0 sudo[110484]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.128387108 +0000 UTC m=+0.040375663 container create 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:02:49 compute-0 systemd[1]: Started libpod-conmon-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope.
Jan 10 17:02:49 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.198675797 +0000 UTC m=+0.110664362 container init 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.204545207 +0000 UTC m=+0.116533762 container start 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:02:49 compute-0 optimistic_panini[110571]: 167 167
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.207829536 +0000 UTC m=+0.119818091 container attach 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.112244958 +0000 UTC m=+0.024233533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:49 compute-0 systemd[1]: libpod-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope: Deactivated successfully.
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.2094215 +0000 UTC m=+0.121410055 container died 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e54025596cb42c39f8762b0b33cda82d73d631c5773645c6acb651a9eb109d-merged.mount: Deactivated successfully.
Jan 10 17:02:49 compute-0 podman[110547]: 2026-01-10 17:02:49.245955437 +0000 UTC m=+0.157943992 container remove 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:02:49 compute-0 ceph-mon[75249]: pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:49 compute-0 systemd[1]: libpod-conmon-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope: Deactivated successfully.
Jan 10 17:02:49 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 17:02:49 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.416256765 +0000 UTC m=+0.042457190 container create a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:02:49 compute-0 systemd[1]: Started libpod-conmon-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope.
Jan 10 17:02:49 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.394577193 +0000 UTC m=+0.020777658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.501984604 +0000 UTC m=+0.128185049 container init a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.513876789 +0000 UTC m=+0.140077214 container start a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.519286726 +0000 UTC m=+0.145487231 container attach a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 17:02:49 compute-0 sudo[110744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkoqcfvxhswedfrqfatdahanwwmikajy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064569.26915-178-194710149465455/AnsiballZ_service_facts.py'
Jan 10 17:02:49 compute-0 sudo[110744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:49 compute-0 bold_yonath[110664]: {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     "0": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "devices": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "/dev/loop3"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             ],
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_name": "ceph_lv0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_size": "21470642176",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "name": "ceph_lv0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "tags": {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_name": "ceph",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.crush_device_class": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.encrypted": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.objectstore": "bluestore",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_id": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.vdo": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.with_tpm": "0"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             },
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "vg_name": "ceph_vg0"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         }
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     ],
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     "1": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "devices": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "/dev/loop4"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             ],
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_name": "ceph_lv1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_size": "21470642176",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "name": "ceph_lv1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "tags": {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_name": "ceph",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.crush_device_class": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.encrypted": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.objectstore": "bluestore",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_id": "1",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.vdo": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.with_tpm": "0"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             },
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "vg_name": "ceph_vg1"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         }
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     ],
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     "2": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "devices": [
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "/dev/loop5"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             ],
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_name": "ceph_lv2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_size": "21470642176",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "name": "ceph_lv2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "tags": {
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.cluster_name": "ceph",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.crush_device_class": "",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.encrypted": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.objectstore": "bluestore",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osd_id": "2",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.vdo": "0",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:                 "ceph.with_tpm": "0"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             },
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "type": "block",
Jan 10 17:02:49 compute-0 bold_yonath[110664]:             "vg_name": "ceph_vg2"
Jan 10 17:02:49 compute-0 bold_yonath[110664]:         }
Jan 10 17:02:49 compute-0 bold_yonath[110664]:     ]
Jan 10 17:02:49 compute-0 bold_yonath[110664]: }
Jan 10 17:02:49 compute-0 systemd[1]: libpod-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope: Deactivated successfully.
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.898599638 +0000 UTC m=+0.524800133 container died a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c-merged.mount: Deactivated successfully.
Jan 10 17:02:49 compute-0 python3.9[110746]: ansible-service_facts Invoked
Jan 10 17:02:49 compute-0 podman[110647]: 2026-01-10 17:02:49.966955044 +0000 UTC m=+0.593155499 container remove a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:02:49 compute-0 systemd[1]: libpod-conmon-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope: Deactivated successfully.
Jan 10 17:02:50 compute-0 network[110777]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:02:50 compute-0 network[110778]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:02:50 compute-0 network[110779]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:02:50 compute-0 sudo[110492]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:50 compute-0 ceph-mon[75249]: 4.8 scrub starts
Jan 10 17:02:50 compute-0 ceph-mon[75249]: 4.8 scrub ok
Jan 10 17:02:50 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 17:02:50 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 17:02:50 compute-0 sudo[110784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:02:50 compute-0 sudo[110784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:50 compute-0 sudo[110784]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:50 compute-0 sudo[110811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:02:50 compute-0 sudo[110811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.085228092 +0000 UTC m=+0.054141509 container create 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:02:51 compute-0 systemd[1]: Started libpod-conmon-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope.
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.053244909 +0000 UTC m=+0.022158336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:51 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.170511189 +0000 UTC m=+0.139424606 container init 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.179383241 +0000 UTC m=+0.148296638 container start 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.18300804 +0000 UTC m=+0.151921557 container attach 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:02:51 compute-0 jovial_kirch[110883]: 167 167
Jan 10 17:02:51 compute-0 systemd[1]: libpod-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope: Deactivated successfully.
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.188553741 +0000 UTC m=+0.157467158 container died 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 17:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf3bbc4cdfd25f28db5bfbe0605b21ebf113c13f6841342fd54adbc729e4e0a6-merged.mount: Deactivated successfully.
Jan 10 17:02:51 compute-0 podman[110863]: 2026-01-10 17:02:51.230764443 +0000 UTC m=+0.199677830 container remove 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 10 17:02:51 compute-0 systemd[1]: libpod-conmon-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope: Deactivated successfully.
Jan 10 17:02:51 compute-0 ceph-mon[75249]: pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:51 compute-0 ceph-mon[75249]: 6.8 scrub starts
Jan 10 17:02:51 compute-0 ceph-mon[75249]: 6.8 scrub ok
Jan 10 17:02:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 17:02:51 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 17:02:51 compute-0 podman[110917]: 2026-01-10 17:02:51.434032511 +0000 UTC m=+0.051903108 container create e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 10 17:02:51 compute-0 systemd[1]: Started libpod-conmon-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope.
Jan 10 17:02:51 compute-0 podman[110917]: 2026-01-10 17:02:51.409562893 +0000 UTC m=+0.027433460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:02:51 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:02:51 compute-0 podman[110917]: 2026-01-10 17:02:51.541461443 +0000 UTC m=+0.159332020 container init e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 17:02:51 compute-0 podman[110917]: 2026-01-10 17:02:51.557463599 +0000 UTC m=+0.175334136 container start e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:02:51 compute-0 podman[110917]: 2026-01-10 17:02:51.562482876 +0000 UTC m=+0.180353643 container attach e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:02:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:52 compute-0 lvm[111043]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:02:52 compute-0 lvm[111043]: VG ceph_vg0 finished
Jan 10 17:02:52 compute-0 ceph-mon[75249]: 6.f scrub starts
Jan 10 17:02:52 compute-0 ceph-mon[75249]: 6.f scrub ok
Jan 10 17:02:52 compute-0 lvm[111044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:02:52 compute-0 lvm[111044]: VG ceph_vg1 finished
Jan 10 17:02:52 compute-0 lvm[111046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:02:52 compute-0 lvm[111046]: VG ceph_vg2 finished
Jan 10 17:02:52 compute-0 romantic_mclaren[110937]: {}
Jan 10 17:02:52 compute-0 systemd[1]: libpod-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Deactivated successfully.
Jan 10 17:02:52 compute-0 systemd[1]: libpod-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Consumed 1.410s CPU time.
Jan 10 17:02:52 compute-0 podman[110917]: 2026-01-10 17:02:52.461680706 +0000 UTC m=+1.079551243 container died e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134-merged.mount: Deactivated successfully.
Jan 10 17:02:52 compute-0 podman[110917]: 2026-01-10 17:02:52.509164172 +0000 UTC m=+1.127034719 container remove e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:02:52 compute-0 systemd[1]: libpod-conmon-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Deactivated successfully.
Jan 10 17:02:52 compute-0 sudo[110811]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:02:52 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:02:52 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:52 compute-0 sudo[111061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:02:52 compute-0 sudo[111061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:02:52 compute-0 sudo[111061]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:53 compute-0 ceph-mon[75249]: pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:53 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:53 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:02:53 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 17:02:53 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 17:02:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:54 compute-0 sudo[110744]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:54 compute-0 ceph-mon[75249]: 4.14 scrub starts
Jan 10 17:02:54 compute-0 ceph-mon[75249]: 4.14 scrub ok
Jan 10 17:02:55 compute-0 ceph-mon[75249]: pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:55 compute-0 sudo[111301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjkiihyqkgecbtmtjrytfrukkonlgqn ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768064574.9660456-193-221021647022482/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768064574.9660456-193-221021647022482/args'
Jan 10 17:02:55 compute-0 sudo[111301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:55 compute-0 sudo[111301]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:56 compute-0 sudo[111468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fecrfjnisnkbcoohgigmllfjwqjkehfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064575.7194595-204-205657302391230/AnsiballZ_dnf.py'
Jan 10 17:02:56 compute-0 sudo[111468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:56 compute-0 python3.9[111470]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:02:57 compute-0 ceph-mon[75249]: pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:57 compute-0 sudo[111468]: pam_unix(sudo:session): session closed for user root
Jan 10 17:02:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:58 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 17:02:58 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 17:02:58 compute-0 sudo[111621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzdxabeokuhfxcpfjkahunghjpirksgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064578.1394277-217-91989775928870/AnsiballZ_package_facts.py'
Jan 10 17:02:58 compute-0 sudo[111621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:02:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:02:59 compute-0 python3.9[111623]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 10 17:02:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 17:02:59 compute-0 ceph-mon[75249]: pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:02:59 compute-0 ceph-mon[75249]: 6.4 scrub starts
Jan 10 17:02:59 compute-0 ceph-mon[75249]: 6.4 scrub ok
Jan 10 17:02:59 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 17:02:59 compute-0 sudo[111621]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:00 compute-0 sudo[111773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbicaamcevnxhsetqgsivmjinsvjlhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064579.7784967-227-205692054905407/AnsiballZ_stat.py'
Jan 10 17:03:00 compute-0 sudo[111773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:00 compute-0 ceph-mon[75249]: 6.b scrub starts
Jan 10 17:03:00 compute-0 ceph-mon[75249]: 6.b scrub ok
Jan 10 17:03:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 17:03:00 compute-0 python3.9[111775]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:00 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 17:03:00 compute-0 sudo[111773]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:00 compute-0 sudo[111851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzdzauhvxipvyopvamietlefczczcone ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064579.7784967-227-205692054905407/AnsiballZ_file.py'
Jan 10 17:03:00 compute-0 sudo[111851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:00 compute-0 python3.9[111853]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:00 compute-0 sudo[111851]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:01 compute-0 ceph-mon[75249]: pgmap v220: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:01 compute-0 ceph-mon[75249]: 6.e scrub starts
Jan 10 17:03:01 compute-0 ceph-mon[75249]: 6.e scrub ok
Jan 10 17:03:01 compute-0 sudo[112003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfxlcvgpnlybibhzjvflpcwtoqcvribx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064581.1295254-239-17517537547886/AnsiballZ_stat.py'
Jan 10 17:03:01 compute-0 sudo[112003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:01 compute-0 python3.9[112005]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:01 compute-0 sudo[112003]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:02 compute-0 sudo[112081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqndkfipifxueffnnutttbtduglmuiuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064581.1295254-239-17517537547886/AnsiballZ_file.py'
Jan 10 17:03:02 compute-0 sudo[112081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:02 compute-0 python3.9[112083]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:02 compute-0 sudo[112081]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:03 compute-0 sudo[112233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bldoqtqezhtdbtjauzpdgxevitvgqrqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064582.6569898-257-119448678485553/AnsiballZ_lineinfile.py'
Jan 10 17:03:03 compute-0 sudo[112233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:03 compute-0 python3.9[112235]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:03 compute-0 sudo[112233]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:03 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 17:03:03 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 17:03:03 compute-0 ceph-mon[75249]: pgmap v221: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:04 compute-0 sudo[112385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbdjmiwwlqgvqqtawlblfnbkpdnligj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064583.9632812-272-264290009639623/AnsiballZ_setup.py'
Jan 10 17:03:04 compute-0 sudo[112385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:04 compute-0 ceph-mon[75249]: 6.1 scrub starts
Jan 10 17:03:04 compute-0 ceph-mon[75249]: 6.1 scrub ok
Jan 10 17:03:04 compute-0 python3.9[112387]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:03:04 compute-0 sudo[112385]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:05 compute-0 ceph-mon[75249]: pgmap v222: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:05 compute-0 sudo[112469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagasbosadygprwlixeehmqsrlasmumb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064583.9632812-272-264290009639623/AnsiballZ_systemd.py'
Jan 10 17:03:05 compute-0 sudo[112469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:05 compute-0 python3.9[112471]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:03:05 compute-0 sudo[112469]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:06 compute-0 ceph-mon[75249]: pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:06 compute-0 sshd-session[107340]: Connection closed by 192.168.122.30 port 40242
Jan 10 17:03:06 compute-0 sshd-session[107337]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:03:06 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 10 17:03:06 compute-0 systemd[1]: session-38.scope: Consumed 25.671s CPU time.
Jan 10 17:03:06 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Jan 10 17:03:06 compute-0 systemd-logind[798]: Removed session 38.
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:09 compute-0 ceph-mon[75249]: pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:11 compute-0 ceph-mon[75249]: pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:11 compute-0 sshd-session[112498]: Accepted publickey for zuul from 192.168.122.30 port 60108 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:03:11 compute-0 systemd-logind[798]: New session 39 of user zuul.
Jan 10 17:03:12 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 10 17:03:12 compute-0 sshd-session[112498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:03:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:12 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 17:03:12 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 17:03:12 compute-0 sudo[112651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcpudezdwljtjxjrmdovxkldbjgnitvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064592.1183438-17-92661331657534/AnsiballZ_file.py'
Jan 10 17:03:12 compute-0 sudo[112651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:12 compute-0 python3.9[112653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:12 compute-0 sudo[112651]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:13 compute-0 ceph-mon[75249]: pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:13 compute-0 ceph-mon[75249]: 6.6 scrub starts
Jan 10 17:03:13 compute-0 ceph-mon[75249]: 6.6 scrub ok
Jan 10 17:03:13 compute-0 sudo[112803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddzdvyllqfddvozqwbychtibympnmev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064593.0482345-29-118334020792347/AnsiballZ_stat.py'
Jan 10 17:03:13 compute-0 sudo[112803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:13 compute-0 python3.9[112805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:13 compute-0 sudo[112803]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:14 compute-0 sudo[112881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrtcixlyxnpthmpedlymbxkriukgrvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064593.0482345-29-118334020792347/AnsiballZ_file.py'
Jan 10 17:03:14 compute-0 sudo[112881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:14 compute-0 python3.9[112883]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:14 compute-0 sudo[112881]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:14 compute-0 sshd-session[112501]: Connection closed by 192.168.122.30 port 60108
Jan 10 17:03:14 compute-0 sshd-session[112498]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:03:14 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 10 17:03:14 compute-0 systemd[1]: session-39.scope: Consumed 1.771s CPU time.
Jan 10 17:03:14 compute-0 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Jan 10 17:03:14 compute-0 systemd-logind[798]: Removed session 39.
Jan 10 17:03:15 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 17:03:15 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 17:03:15 compute-0 ceph-mon[75249]: pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:16 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 17:03:16 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 17:03:16 compute-0 ceph-mon[75249]: 6.2 scrub starts
Jan 10 17:03:16 compute-0 ceph-mon[75249]: 6.2 scrub ok
Jan 10 17:03:17 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 17:03:17 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 17:03:17 compute-0 ceph-mon[75249]: pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:17 compute-0 ceph-mon[75249]: 6.d scrub starts
Jan 10 17:03:17 compute-0 ceph-mon[75249]: 6.d scrub ok
Jan 10 17:03:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:18 compute-0 ceph-mon[75249]: 6.c scrub starts
Jan 10 17:03:18 compute-0 ceph-mon[75249]: 6.c scrub ok
Jan 10 17:03:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:19 compute-0 ceph-mon[75249]: pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:20 compute-0 sshd-session[112909]: Accepted publickey for zuul from 192.168.122.30 port 42400 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:03:20 compute-0 systemd-logind[798]: New session 40 of user zuul.
Jan 10 17:03:20 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 10 17:03:20 compute-0 sshd-session[112909]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:03:21 compute-0 ceph-mon[75249]: pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:21 compute-0 python3.9[113062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:03:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:22 compute-0 sudo[113216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piksaudnzosnmogtemkgmlizqtrathis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064602.1259499-28-91638637321548/AnsiballZ_file.py'
Jan 10 17:03:22 compute-0 sudo[113216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:22 compute-0 python3.9[113218]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:22 compute-0 sudo[113216]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:23 compute-0 ceph-mon[75249]: pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:23 compute-0 sudo[113391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agakjhemavzvdcjtynfpxhjicjheoblv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064602.9491978-36-69407248996122/AnsiballZ_stat.py'
Jan 10 17:03:23 compute-0 sudo[113391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:23 compute-0 python3.9[113393]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:23 compute-0 sudo[113391]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:23 compute-0 sudo[113469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmhrjebqdqvjnglctbuulytlvuqsigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064602.9491978-36-69407248996122/AnsiballZ_file.py'
Jan 10 17:03:23 compute-0 sudo[113469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:24 compute-0 python3.9[113471]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8a12f6b2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:24 compute-0 sudo[113469]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:24 compute-0 sudo[113621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfpuhvfwbcpepixwpgvbrovpqymuejlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064604.5712564-56-233395131167695/AnsiballZ_stat.py'
Jan 10 17:03:24 compute-0 sudo[113621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:25 compute-0 python3.9[113623]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:25 compute-0 sudo[113621]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:25 compute-0 ceph-mon[75249]: pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:25 compute-0 sudo[113699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagckeenjldfgrjqwipylrzgxqetliqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064604.5712564-56-233395131167695/AnsiballZ_file.py'
Jan 10 17:03:25 compute-0 sudo[113699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:25 compute-0 python3.9[113701]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.w9dsm5cd recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:25 compute-0 sudo[113699]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:26 compute-0 sudo[113851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueyvqsyqbrjwulsgbiwbgrsyswupnpts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064605.7728713-69-38247715502906/AnsiballZ_file.py'
Jan 10 17:03:26 compute-0 sudo[113851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:26 compute-0 python3.9[113853]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:03:26 compute-0 sudo[113851]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:26 compute-0 sudo[114003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvzxaibhmyydoivqsotghaqhvvaychj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064606.5909374-77-266164337992420/AnsiballZ_stat.py'
Jan 10 17:03:26 compute-0 sudo[114003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:27 compute-0 python3.9[114005]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:27 compute-0 sudo[114003]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:27 compute-0 sudo[114081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntyjfxpcglbiwjzxianvdnighimdpzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064606.5909374-77-266164337992420/AnsiballZ_file.py'
Jan 10 17:03:27 compute-0 sudo[114081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:27 compute-0 python3.9[114083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:03:27 compute-0 sudo[114081]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:27 compute-0 ceph-mon[75249]: pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:28 compute-0 sudo[114233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnkmrpowhheefafmmvifqjwpkcgkohx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064607.695798-77-237758163527766/AnsiballZ_stat.py'
Jan 10 17:03:28 compute-0 sudo[114233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:28 compute-0 python3.9[114235]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:28 compute-0 sudo[114233]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:28 compute-0 sudo[114311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyoqfcjmieapwycdcfttcrkaayvcbsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064607.695798-77-237758163527766/AnsiballZ_file.py'
Jan 10 17:03:28 compute-0 sudo[114311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:28 compute-0 ceph-mon[75249]: pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:28 compute-0 python3.9[114313]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:03:28 compute-0 sudo[114311]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:29 compute-0 sudo[114463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpkbuczgeksdkvimralbssunwszpdegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064608.9175072-100-219041898284284/AnsiballZ_file.py'
Jan 10 17:03:29 compute-0 sudo[114463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:29 compute-0 python3.9[114465]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:29 compute-0 sudo[114463]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:29 compute-0 sudo[114615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljzxbncjzrktntdlmymjxuqqufxfgez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064609.5950015-108-73386795475563/AnsiballZ_stat.py'
Jan 10 17:03:29 compute-0 sudo[114615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:30 compute-0 python3.9[114617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:30 compute-0 sudo[114615]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:30 compute-0 sudo[114693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nryglpgdkktouxfrtsrciupwlyhunzpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064609.5950015-108-73386795475563/AnsiballZ_file.py'
Jan 10 17:03:30 compute-0 sudo[114693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:30 compute-0 python3.9[114695]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:30 compute-0 sudo[114693]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:31 compute-0 ceph-mon[75249]: pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:31 compute-0 sudo[114845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyrdwgmcyaclarxrxkhbufcefpqiqkgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064610.7520878-120-58228761720097/AnsiballZ_stat.py'
Jan 10 17:03:31 compute-0 sudo[114845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:31 compute-0 python3.9[114847]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:31 compute-0 sudo[114845]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:31 compute-0 sudo[114923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexysduwbjusfnmficjraiigdrazcwyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064610.7520878-120-58228761720097/AnsiballZ_file.py'
Jan 10 17:03:31 compute-0 sudo[114923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:31 compute-0 python3.9[114925]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:31 compute-0 sudo[114923]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:32 compute-0 sudo[115075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrtfnaolhkijihbeaioeupilrowvzxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064612.1079102-132-123949667632321/AnsiballZ_systemd.py'
Jan 10 17:03:32 compute-0 sudo[115075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:32 compute-0 python3.9[115077]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:03:32 compute-0 systemd[1]: Reloading.
Jan 10 17:03:33 compute-0 systemd-rc-local-generator[115104]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:03:33 compute-0 systemd-sysv-generator[115107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:03:33 compute-0 ceph-mon[75249]: pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:33 compute-0 sudo[115075]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:33 compute-0 sudo[115265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagypjaovssxztlxfewddovopvrncots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064613.524705-140-140079182818552/AnsiballZ_stat.py'
Jan 10 17:03:33 compute-0 sudo[115265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:34 compute-0 python3.9[115267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:34 compute-0 sudo[115265]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:34 compute-0 sudo[115343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aarwobtuhnetagfomhhldhkivwjnfrui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064613.524705-140-140079182818552/AnsiballZ_file.py'
Jan 10 17:03:34 compute-0 sudo[115343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:34 compute-0 python3.9[115345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:34 compute-0 sudo[115343]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:35 compute-0 sudo[115495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmynudenjtjyojqhzhhztfovcrhqwapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064614.6842072-152-76324525875062/AnsiballZ_stat.py'
Jan 10 17:03:35 compute-0 sudo[115495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:35 compute-0 python3.9[115497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:35 compute-0 sudo[115495]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:35 compute-0 ceph-mon[75249]: pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:35 compute-0 sudo[115573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zintbgcufxvdzqpyngqxdfmsjdxlwodt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064614.6842072-152-76324525875062/AnsiballZ_file.py'
Jan 10 17:03:35 compute-0 sudo[115573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:35 compute-0 python3.9[115575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:35 compute-0 sudo[115573]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:36 compute-0 sudo[115725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hboavyowzlrzcmwyacuhqtbuesmazvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064615.9231849-164-196364663021384/AnsiballZ_systemd.py'
Jan 10 17:03:36 compute-0 sudo[115725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:36 compute-0 python3.9[115727]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:03:36 compute-0 systemd[1]: Reloading.
Jan 10 17:03:36 compute-0 systemd-sysv-generator[115761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:03:36 compute-0 systemd-rc-local-generator[115757]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:03:36 compute-0 systemd[1]: Starting Create netns directory...
Jan 10 17:03:36 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 17:03:36 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 17:03:36 compute-0 systemd[1]: Finished Create netns directory.
Jan 10 17:03:36 compute-0 sudo[115725]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:37 compute-0 ceph-mon[75249]: pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:37 compute-0 python3.9[115919]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:03:37 compute-0 network[115936]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:03:37 compute-0 network[115937]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:03:37 compute-0 network[115938]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:03:38
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'images', 'vms']
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:03:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:03:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:03:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:39 compute-0 ceph-mon[75249]: pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:41 compute-0 ceph-mon[75249]: pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:42 compute-0 sudo[116198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdrprwajyzvukrilmmicgcyudidgfxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064622.1932046-190-65325485316714/AnsiballZ_stat.py'
Jan 10 17:03:42 compute-0 sudo[116198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:42 compute-0 python3.9[116200]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:42 compute-0 sudo[116198]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:43 compute-0 sudo[116276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgexrpxibkmrgaveozkvocgypwjpxdfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064622.1932046-190-65325485316714/AnsiballZ_file.py'
Jan 10 17:03:43 compute-0 sudo[116276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:43 compute-0 python3.9[116278]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:43 compute-0 sudo[116276]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:43 compute-0 ceph-mon[75249]: pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:43 compute-0 sudo[116428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtlcwuyefmcnwpmoqfloaesgbvsaqvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064623.5094087-203-201064061894190/AnsiballZ_file.py'
Jan 10 17:03:43 compute-0 sudo[116428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:43 compute-0 python3.9[116430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:43 compute-0 sudo[116428]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:03:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:03:44 compute-0 sudo[116580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvzgbndjtaoyumvpyirqyqikxaghjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064624.1494336-211-114172270638863/AnsiballZ_stat.py'
Jan 10 17:03:44 compute-0 sudo[116580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:44 compute-0 python3.9[116582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:44 compute-0 sudo[116580]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:44 compute-0 sudo[116658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzxolmgxqlazjbnfttbrbtwhcxspxxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064624.1494336-211-114172270638863/AnsiballZ_file.py'
Jan 10 17:03:45 compute-0 sudo[116658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:45 compute-0 python3.9[116660]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:45 compute-0 sudo[116658]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:45 compute-0 ceph-mon[75249]: pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:46 compute-0 sudo[116810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggzdukptkfzluyizbvajitknnlfacxzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064625.50553-226-256258307819786/AnsiballZ_timezone.py'
Jan 10 17:03:46 compute-0 sudo[116810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:46 compute-0 python3.9[116812]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 17:03:46 compute-0 systemd[1]: Starting Time & Date Service...
Jan 10 17:03:46 compute-0 systemd[1]: Started Time & Date Service.
Jan 10 17:03:46 compute-0 sudo[116810]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:46 compute-0 sudo[116966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpehhwlwahtsrhbijdduafmomycfggvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064626.6593497-235-52769700353920/AnsiballZ_file.py'
Jan 10 17:03:46 compute-0 sudo[116966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:47 compute-0 python3.9[116968]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:47 compute-0 sudo[116966]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:47 compute-0 ceph-mon[75249]: pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:47 compute-0 sudo[117118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jajxlqqhkusstfqibfqnnisothnjxnrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064627.4449172-243-86337382752411/AnsiballZ_stat.py'
Jan 10 17:03:47 compute-0 sudo[117118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:47 compute-0 python3.9[117120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:48 compute-0 sudo[117118]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:48 compute-0 sudo[117196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvkqtfivrrhirjpojvjoypsrwzmonwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064627.4449172-243-86337382752411/AnsiballZ_file.py'
Jan 10 17:03:48 compute-0 sudo[117196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:48 compute-0 python3.9[117198]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:48 compute-0 sudo[117196]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:48 compute-0 sudo[117348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xltyapjadjksvjrombezzschtjgcofud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064628.703638-255-175426389820953/AnsiballZ_stat.py'
Jan 10 17:03:48 compute-0 sudo[117348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:49 compute-0 python3.9[117350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:49 compute-0 sudo[117348]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:49 compute-0 sudo[117426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctblinjwsotztndjvutkewzzyykaqwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064628.703638-255-175426389820953/AnsiballZ_file.py'
Jan 10 17:03:49 compute-0 sudo[117426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:49 compute-0 ceph-mon[75249]: pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:49 compute-0 python3.9[117428]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tq8x4k1f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:49 compute-0 sudo[117426]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:50 compute-0 sudo[117578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrsujrcwggzzrnhzoyymlehvwxxpjdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064629.8452373-267-46036452653040/AnsiballZ_stat.py'
Jan 10 17:03:50 compute-0 sudo[117578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:50 compute-0 python3.9[117580]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:50 compute-0 sudo[117578]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:50 compute-0 sudo[117656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwposhvbefstxahbbxxweuuevfowdmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064629.8452373-267-46036452653040/AnsiballZ_file.py'
Jan 10 17:03:50 compute-0 sudo[117656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:50 compute-0 ceph-mon[75249]: pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:50 compute-0 python3.9[117658]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:50 compute-0 sudo[117656]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:51 compute-0 sudo[117808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toojwxfkjktukbovgnqzckhdvdibaqxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064630.9269626-280-148134531043889/AnsiballZ_command.py'
Jan 10 17:03:51 compute-0 sudo[117808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:51 compute-0 python3.9[117810]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:03:51 compute-0 sudo[117808]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:52 compute-0 sudo[117961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqvgxbqjcabscycgzdwwprkatccxbbb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064631.703507-288-33489068644479/AnsiballZ_edpm_nftables_from_files.py'
Jan 10 17:03:52 compute-0 sudo[117961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:52 compute-0 python3[117963]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 17:03:52 compute-0 sudo[117961]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:52 compute-0 sudo[118057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:03:52 compute-0 sudo[118057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:52 compute-0 sudo[118057]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:52 compute-0 sudo[118095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:03:52 compute-0 sudo[118095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:52 compute-0 sudo[118163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkxybnempqeefrhgrprjdoirqswqjrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064632.529742-296-236004641511147/AnsiballZ_stat.py'
Jan 10 17:03:52 compute-0 sudo[118163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:53 compute-0 python3.9[118165]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:53 compute-0 sudo[118163]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:53 compute-0 ceph-mon[75249]: pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:53 compute-0 sudo[118264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arcuninnoqpsxixzpsftnptbfktblzvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064632.529742-296-236004641511147/AnsiballZ_file.py'
Jan 10 17:03:53 compute-0 sudo[118264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:53 compute-0 sudo[118095]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:03:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:03:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:03:53 compute-0 sudo[118275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:03:53 compute-0 sudo[118275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:53 compute-0 sudo[118275]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:53 compute-0 python3.9[118273]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:53 compute-0 sudo[118300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:03:53 compute-0 sudo[118300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:53 compute-0 sudo[118264]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:53 compute-0 podman[118413]: 2026-01-10 17:03:53.794245063 +0000 UTC m=+0.044641642 container create 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:03:53 compute-0 systemd[1]: Started libpod-conmon-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope.
Jan 10 17:03:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:53 compute-0 podman[118413]: 2026-01-10 17:03:53.776045369 +0000 UTC m=+0.026441968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:53 compute-0 sudo[118505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frccpeugiulcshydkbvcunyryeuuxmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064633.6690245-308-32199545878456/AnsiballZ_stat.py'
Jan 10 17:03:53 compute-0 sudo[118505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:54 compute-0 podman[118413]: 2026-01-10 17:03:54.073025047 +0000 UTC m=+0.323421686 container init 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:03:54 compute-0 podman[118413]: 2026-01-10 17:03:54.080764735 +0000 UTC m=+0.331161314 container start 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:03:54 compute-0 podman[118413]: 2026-01-10 17:03:54.084425639 +0000 UTC m=+0.334822228 container attach 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 17:03:54 compute-0 pedantic_chatterjee[118452]: 167 167
Jan 10 17:03:54 compute-0 systemd[1]: libpod-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope: Deactivated successfully.
Jan 10 17:03:54 compute-0 podman[118413]: 2026-01-10 17:03:54.092004713 +0000 UTC m=+0.342401292 container died 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:03:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:54 compute-0 python3.9[118507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ba19d5facd5e93192f34b298c58f901a42a44a584fdb3ab510019d8258e36a-merged.mount: Deactivated successfully.
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:03:54 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:03:54 compute-0 sudo[118505]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:54 compute-0 podman[118413]: 2026-01-10 17:03:54.226079319 +0000 UTC m=+0.476475928 container remove 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:03:54 compute-0 systemd[1]: libpod-conmon-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope: Deactivated successfully.
Jan 10 17:03:54 compute-0 podman[118577]: 2026-01-10 17:03:54.392184871 +0000 UTC m=+0.044857848 container create edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 17:03:54 compute-0 sudo[118617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimcwopdmwqywlxnpsitvoihxyiqyalj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064633.6690245-308-32199545878456/AnsiballZ_file.py'
Jan 10 17:03:54 compute-0 sudo[118617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:54 compute-0 systemd[1]: Started libpod-conmon-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope.
Jan 10 17:03:54 compute-0 podman[118577]: 2026-01-10 17:03:54.373684688 +0000 UTC m=+0.026357675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:54 compute-0 podman[118577]: 2026-01-10 17:03:54.509824033 +0000 UTC m=+0.162497000 container init edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:03:54 compute-0 podman[118577]: 2026-01-10 17:03:54.51677613 +0000 UTC m=+0.169449097 container start edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:03:54 compute-0 podman[118577]: 2026-01-10 17:03:54.520056322 +0000 UTC m=+0.172729359 container attach edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:03:54 compute-0 python3.9[118621]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:54 compute-0 sudo[118617]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:55 compute-0 ecstatic_mccarthy[118624]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:03:55 compute-0 ecstatic_mccarthy[118624]: --> All data devices are unavailable
Jan 10 17:03:55 compute-0 systemd[1]: libpod-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope: Deactivated successfully.
Jan 10 17:03:55 compute-0 podman[118577]: 2026-01-10 17:03:55.12334318 +0000 UTC m=+0.776016197 container died edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:03:55 compute-0 sudo[118793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcgtuwzovxbnsrvrmbrbuyqgbpkvipop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064634.8087924-320-181114373131542/AnsiballZ_stat.py'
Jan 10 17:03:55 compute-0 sudo[118793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e-merged.mount: Deactivated successfully.
Jan 10 17:03:55 compute-0 podman[118577]: 2026-01-10 17:03:55.165615924 +0000 UTC m=+0.818288891 container remove edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:03:55 compute-0 systemd[1]: libpod-conmon-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope: Deactivated successfully.
Jan 10 17:03:55 compute-0 sudo[118300]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:55 compute-0 sudo[118809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:03:55 compute-0 sudo[118809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:55 compute-0 sudo[118809]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:55 compute-0 sudo[118834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:03:55 compute-0 sudo[118834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:55 compute-0 python3.9[118796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:55 compute-0 sudo[118793]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:55 compute-0 ceph-mon[75249]: pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:55 compute-0 sudo[118958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dafqtzrqffcijzuetkuytymytnsbdqsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064634.8087924-320-181114373131542/AnsiballZ_file.py'
Jan 10 17:03:55 compute-0 sudo[118958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.59635692 +0000 UTC m=+0.051100705 container create 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 17:03:55 compute-0 systemd[1]: Started libpod-conmon-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope.
Jan 10 17:03:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.578548107 +0000 UTC m=+0.033291902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.679043225 +0000 UTC m=+0.133787090 container init 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.685643011 +0000 UTC m=+0.140386796 container start 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.688957415 +0000 UTC m=+0.143701290 container attach 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 17:03:55 compute-0 keen_elion[118965]: 167 167
Jan 10 17:03:55 compute-0 systemd[1]: libpod-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope: Deactivated successfully.
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.69233295 +0000 UTC m=+0.147076735 container died 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1875434d74bb55cb290a601a65aff9926edcc9d66d3058b5c21e58041b0e0c64-merged.mount: Deactivated successfully.
Jan 10 17:03:55 compute-0 podman[118930]: 2026-01-10 17:03:55.729949083 +0000 UTC m=+0.184692858 container remove 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:03:55 compute-0 systemd[1]: libpod-conmon-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope: Deactivated successfully.
Jan 10 17:03:55 compute-0 python3.9[118962]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:55 compute-0 sudo[118958]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:55 compute-0 podman[118994]: 2026-01-10 17:03:55.895760536 +0000 UTC m=+0.055320744 container create a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:03:55 compute-0 systemd[1]: Started libpod-conmon-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope.
Jan 10 17:03:55 compute-0 podman[118994]: 2026-01-10 17:03:55.873634231 +0000 UTC m=+0.033194469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:55 compute-0 podman[118994]: 2026-01-10 17:03:55.984525543 +0000 UTC m=+0.144085801 container init a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 17:03:55 compute-0 podman[118994]: 2026-01-10 17:03:55.993235979 +0000 UTC m=+0.152796187 container start a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:03:55 compute-0 podman[118994]: 2026-01-10 17:03:55.996859151 +0000 UTC m=+0.156419369 container attach a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:03:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:56 compute-0 sudo[119165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwdfsvtkrbrwcdoiujtmmihvxekltro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064635.968789-332-97960525300357/AnsiballZ_stat.py'
Jan 10 17:03:56 compute-0 sudo[119165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:56 compute-0 dazzling_moore[119039]: {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     "0": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "devices": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "/dev/loop3"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             ],
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_name": "ceph_lv0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_size": "21470642176",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "name": "ceph_lv0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "tags": {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_name": "ceph",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.crush_device_class": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.encrypted": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.objectstore": "bluestore",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_id": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.vdo": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.with_tpm": "0"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             },
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "vg_name": "ceph_vg0"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         }
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     ],
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     "1": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "devices": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "/dev/loop4"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             ],
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_name": "ceph_lv1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_size": "21470642176",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "name": "ceph_lv1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "tags": {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_name": "ceph",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.crush_device_class": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.encrypted": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.objectstore": "bluestore",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_id": "1",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.vdo": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.with_tpm": "0"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             },
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "vg_name": "ceph_vg1"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         }
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     ],
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     "2": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "devices": [
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "/dev/loop5"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             ],
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_name": "ceph_lv2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_size": "21470642176",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "name": "ceph_lv2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "tags": {
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.cluster_name": "ceph",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.crush_device_class": "",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.encrypted": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.objectstore": "bluestore",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osd_id": "2",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.vdo": "0",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:                 "ceph.with_tpm": "0"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             },
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "type": "block",
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:             "vg_name": "ceph_vg2"
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:         }
Jan 10 17:03:56 compute-0 dazzling_moore[119039]:     ]
Jan 10 17:03:56 compute-0 dazzling_moore[119039]: }
Jan 10 17:03:56 compute-0 podman[118994]: 2026-01-10 17:03:56.332056238 +0000 UTC m=+0.491616446 container died a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:03:56 compute-0 systemd[1]: libpod-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope: Deactivated successfully.
Jan 10 17:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c-merged.mount: Deactivated successfully.
Jan 10 17:03:56 compute-0 podman[118994]: 2026-01-10 17:03:56.384127869 +0000 UTC m=+0.543688087 container remove a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:03:56 compute-0 systemd[1]: libpod-conmon-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope: Deactivated successfully.
Jan 10 17:03:56 compute-0 sudo[118834]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:56 compute-0 sudo[119179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:03:56 compute-0 sudo[119179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:56 compute-0 sudo[119179]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:56 compute-0 python3.9[119167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:56 compute-0 sudo[119165]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:56 compute-0 sudo[119204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:03:56 compute-0 sudo[119204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:56 compute-0 sudo[119314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ordlloibkoyopypwftakbjtsopytjykc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064635.968789-332-97960525300357/AnsiballZ_file.py'
Jan 10 17:03:56 compute-0 sudo[119314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.827474671 +0000 UTC m=+0.052782842 container create 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 17:03:56 compute-0 systemd[1]: Started libpod-conmon-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope.
Jan 10 17:03:56 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.883261876 +0000 UTC m=+0.108570057 container init 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.889977666 +0000 UTC m=+0.115285817 container start 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.800444817 +0000 UTC m=+0.025753028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.893282259 +0000 UTC m=+0.118590440 container attach 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:03:56 compute-0 loving_gates[119333]: 167 167
Jan 10 17:03:56 compute-0 systemd[1]: libpod-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope: Deactivated successfully.
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.895622995 +0000 UTC m=+0.120931146 container died 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-641f6c0eaecf820e63e874560af30da702074cfc860acf6997689525a6b99435-merged.mount: Deactivated successfully.
Jan 10 17:03:56 compute-0 podman[119315]: 2026-01-10 17:03:56.934128663 +0000 UTC m=+0.159436814 container remove 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:03:56 compute-0 systemd[1]: libpod-conmon-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope: Deactivated successfully.
Jan 10 17:03:56 compute-0 python3.9[119318]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:57 compute-0 sudo[119314]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:57 compute-0 podman[119381]: 2026-01-10 17:03:57.112293385 +0000 UTC m=+0.044733355 container create a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 17:03:57 compute-0 systemd[1]: Started libpod-conmon-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope.
Jan 10 17:03:57 compute-0 podman[119381]: 2026-01-10 17:03:57.091676813 +0000 UTC m=+0.024116783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:03:57 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:03:57 compute-0 podman[119381]: 2026-01-10 17:03:57.209422158 +0000 UTC m=+0.141862128 container init a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:03:57 compute-0 podman[119381]: 2026-01-10 17:03:57.216130568 +0000 UTC m=+0.148570518 container start a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 17:03:57 compute-0 podman[119381]: 2026-01-10 17:03:57.220341497 +0000 UTC m=+0.152781457 container attach a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:03:57 compute-0 ceph-mon[75249]: pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:57 compute-0 sudo[119542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidmbhlbnsuctkpxeitlqpivufcnczqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064637.162915-344-203743213652529/AnsiballZ_stat.py'
Jan 10 17:03:57 compute-0 sudo[119542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:57 compute-0 python3.9[119547]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:03:57 compute-0 sudo[119542]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:57 compute-0 lvm[119647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:03:57 compute-0 lvm[119647]: VG ceph_vg0 finished
Jan 10 17:03:57 compute-0 lvm[119649]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:03:57 compute-0 lvm[119649]: VG ceph_vg1 finished
Jan 10 17:03:57 compute-0 lvm[119658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:03:57 compute-0 lvm[119658]: VG ceph_vg2 finished
Jan 10 17:03:58 compute-0 sudo[119684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpbzjnkrpuvwujkecneywiqehyhupag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064637.162915-344-203743213652529/AnsiballZ_file.py'
Jan 10 17:03:58 compute-0 sudo[119684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:58 compute-0 sweet_shtern[119421]: {}
Jan 10 17:03:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:58 compute-0 systemd[1]: libpod-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Deactivated successfully.
Jan 10 17:03:58 compute-0 podman[119381]: 2026-01-10 17:03:58.107095482 +0000 UTC m=+1.039535472 container died a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:03:58 compute-0 systemd[1]: libpod-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Consumed 1.410s CPU time.
Jan 10 17:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279-merged.mount: Deactivated successfully.
Jan 10 17:03:58 compute-0 podman[119381]: 2026-01-10 17:03:58.16615729 +0000 UTC m=+1.098597230 container remove a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:03:58 compute-0 systemd[1]: libpod-conmon-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Deactivated successfully.
Jan 10 17:03:58 compute-0 python3.9[119687]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:03:58 compute-0 sudo[119204]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:58 compute-0 sudo[119684]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:03:58 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:03:58 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:58 compute-0 sudo[119704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:03:58 compute-0 sudo[119704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:03:58 compute-0 sudo[119704]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:58 compute-0 sudo[119875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyngtxrptduleypsdojmnpujllhntxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064638.4559734-357-145704199260665/AnsiballZ_command.py'
Jan 10 17:03:58 compute-0 sudo[119875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:03:59 compute-0 python3.9[119877]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:03:59 compute-0 sudo[119875]: pam_unix(sudo:session): session closed for user root
Jan 10 17:03:59 compute-0 ceph-mon[75249]: pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:03:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:03:59 compute-0 sudo[120030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogynuxqurxlmewzowranhkgtbjdvxidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064639.2907646-365-129873244016914/AnsiballZ_blockinfile.py'
Jan 10 17:03:59 compute-0 sudo[120030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:03:59 compute-0 python3.9[120032]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:00 compute-0 sudo[120030]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:00 compute-0 sudo[120182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklrmpdkwyiqwsqhpjthmyfiprbpagmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064640.224044-374-6624699947364/AnsiballZ_file.py'
Jan 10 17:04:00 compute-0 sudo[120182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:00 compute-0 python3.9[120184]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:00 compute-0 sudo[120182]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:01 compute-0 sudo[120334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjioihlqdrkkliuuvcxkruiidghnlar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064640.8138607-374-30053693319098/AnsiballZ_file.py'
Jan 10 17:04:01 compute-0 sudo[120334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:01 compute-0 python3.9[120336]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:01 compute-0 sudo[120334]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:01 compute-0 ceph-mon[75249]: pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:01 compute-0 sudo[120486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hprxoytqftkeemfhawnsadsoutqjuvon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064641.3969026-389-188808112331949/AnsiballZ_mount.py'
Jan 10 17:04:01 compute-0 sudo[120486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:02 compute-0 python3.9[120488]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 17:04:02 compute-0 sudo[120486]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:02 compute-0 sudo[120638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyykmawarvggsgvtwhunbkzslbclwgoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064642.1623683-389-275620303183835/AnsiballZ_mount.py'
Jan 10 17:04:02 compute-0 sudo[120638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:02 compute-0 python3.9[120640]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 17:04:02 compute-0 sudo[120638]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:03 compute-0 sshd-session[112912]: Connection closed by 192.168.122.30 port 42400
Jan 10 17:04:03 compute-0 sshd-session[112909]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:04:03 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 10 17:04:03 compute-0 systemd[1]: session-40.scope: Consumed 30.739s CPU time.
Jan 10 17:04:03 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Jan 10 17:04:03 compute-0 systemd-logind[798]: Removed session 40.
Jan 10 17:04:03 compute-0 ceph-mon[75249]: pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:03 compute-0 sshd-session[120665]: Connection closed by authenticating user root 216.36.124.133 port 43530 [preauth]
Jan 10 17:04:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:05 compute-0 ceph-mon[75249]: pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:07 compute-0 ceph-mon[75249]: pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:08 compute-0 sshd-session[120667]: Accepted publickey for zuul from 192.168.122.30 port 40548 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:04:08 compute-0 systemd-logind[798]: New session 41 of user zuul.
Jan 10 17:04:08 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 10 17:04:08 compute-0 sshd-session[120667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:09 compute-0 ceph-mon[75249]: pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:09 compute-0 sudo[120820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwjwshqoxjoxxlwfesgheidfzmuvhdbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064648.879557-16-37711569392610/AnsiballZ_tempfile.py'
Jan 10 17:04:09 compute-0 sudo[120820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:09 compute-0 python3.9[120822]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 10 17:04:09 compute-0 sudo[120820]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:10 compute-0 sudo[120972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjdittqbdpjvmgkbkywmvhrhyeglmzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064649.6838756-28-229396105159697/AnsiballZ_stat.py'
Jan 10 17:04:10 compute-0 sudo[120972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:10 compute-0 python3.9[120974]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:04:10 compute-0 sudo[120972]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:10 compute-0 sudo[121126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuruwzppjcbhmdpobznlxrkltmqykmxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064650.4713118-36-40885625488012/AnsiballZ_slurp.py'
Jan 10 17:04:10 compute-0 sudo[121126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:11 compute-0 python3.9[121128]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 10 17:04:11 compute-0 sudo[121126]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:11 compute-0 ceph-mon[75249]: pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:11 compute-0 sudo[121278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubymzlukzfjqbsjwiutomzryskkkfizi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064651.2719548-44-146808587901152/AnsiballZ_stat.py'
Jan 10 17:04:11 compute-0 sudo[121278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:11 compute-0 python3.9[121280]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.prj1z4_9 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:11 compute-0 sudo[121278]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:12 compute-0 sudo[121403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlygrxduvbvckcgilzauloorxlywbqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064651.2719548-44-146808587901152/AnsiballZ_copy.py'
Jan 10 17:04:12 compute-0 sudo[121403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:12 compute-0 ceph-mon[75249]: pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:12 compute-0 python3.9[121405]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.prj1z4_9 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064651.2719548-44-146808587901152/.source.prj1z4_9 _original_basename=.ucz4gksr follow=False checksum=c16efaf3fcf3c55e6b76526c00ad8db14a29321c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:12 compute-0 sudo[121403]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:13 compute-0 sudo[121555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlegdqgvmdaaxnpliehibpomuwlxuggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064652.7034435-59-95035057823736/AnsiballZ_setup.py'
Jan 10 17:04:13 compute-0 sudo[121555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:13 compute-0 python3.9[121557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:04:13 compute-0 sudo[121555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:14 compute-0 sudo[121707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbnsvscwtledreliebabuoszkqhgkhwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064653.9370167-68-30563985313468/AnsiballZ_blockinfile.py'
Jan 10 17:04:14 compute-0 sudo[121707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:14 compute-0 python3.9[121709]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLbf1u7QZKIo5G+YWiNhcXI+Bt6YV4GfE/ux3dizYMgWBt9o+PmlYYMiVREbRw0Bbw1ytXXbF5+nj3Xb2CXI8ussGl0WspjKSeiZ6iZLcZTiCJLgJ/9hsvwXR//dQk9MHjPU21/f9Bmm5bXO7JD6wyeZ6BhNNSRil+tMQ9dtlaRlLoSzr5CXtKSgvp0EnFO/wO0yIjn5vj0Kg53pKe6PklqqbDKQe4B3RTSjCo711H66GqFuA0OZDkpKEVqdQFy9HUPAxgflwamxh1bRZYQ4oZ+sRK0y7Aau5nyIxefmh+nrgkwpuGnfu/PBcFHlgDpGdK5SR2MN7oUwfJtJl+qp1MFaUz+TRF7THXK8e6MCD0RPGfqlim6D6qGfKkbBYM50kTncYakPtGOrLbf/hARiTSEduglbNBYv0vatpv1emwjOPwkAu3DZdOi4PokhOq+BnOnG95UH3ZzOWO+UnNEiCQgCu7NbzJOFb/KoBU8XRT1o8yPWdpwQ+mKGFE1PGsA7k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVw/TzKh+QQYsI9HFUl2xKC/Iozkh6C2Rlm1r7qShYC
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHuUq5M0wkVhsnk90cNjQOZixGqQR1X/PXyTQuPIQfBmEkOk4KlPkJk1al+bzULcCOXjdbnilDQbL6yRpQlhrU=
                                              create=True mode=0644 path=/tmp/ansible.prj1z4_9 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:14 compute-0 sudo[121707]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:15 compute-0 ceph-mon[75249]: pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:15 compute-0 sudo[121859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljgbyhhddbvhgldmrjqvjyctwzuofot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064654.7687688-76-46623822822018/AnsiballZ_command.py'
Jan 10 17:04:15 compute-0 sudo[121859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:15 compute-0 python3.9[121861]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.prj1z4_9' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:04:15 compute-0 sudo[121859]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:16 compute-0 sudo[122013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dywifvfqemxgauywiihbhctdzheorysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064655.6521378-84-168044751789878/AnsiballZ_file.py'
Jan 10 17:04:16 compute-0 sudo[122013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:16 compute-0 python3.9[122015]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.prj1z4_9 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:16 compute-0 sudo[122013]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:16 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 17:04:16 compute-0 sshd-session[120670]: Connection closed by 192.168.122.30 port 40548
Jan 10 17:04:16 compute-0 sshd-session[120667]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:04:16 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 10 17:04:16 compute-0 systemd[1]: session-41.scope: Consumed 5.463s CPU time.
Jan 10 17:04:16 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Jan 10 17:04:16 compute-0 systemd-logind[798]: Removed session 41.
Jan 10 17:04:17 compute-0 ceph-mon[75249]: pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:19 compute-0 ceph-mon[75249]: pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:21 compute-0 ceph-mon[75249]: pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:21 compute-0 sshd-session[122042]: Accepted publickey for zuul from 192.168.122.30 port 52758 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:04:21 compute-0 systemd-logind[798]: New session 42 of user zuul.
Jan 10 17:04:21 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 10 17:04:21 compute-0 sshd-session[122042]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:04:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:22 compute-0 python3.9[122195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:04:23 compute-0 ceph-mon[75249]: pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:23 compute-0 sudo[122349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paaxznegvulxexnczjoykurwakhkgkpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064663.202727-27-266764045960969/AnsiballZ_systemd.py'
Jan 10 17:04:23 compute-0 sudo[122349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:24 compute-0 python3.9[122351]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 17:04:24 compute-0 sudo[122349]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:24 compute-0 sudo[122503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vijgtkmqhdrbackoujfoqpoyplsnhvcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064664.4571373-35-28325509339307/AnsiballZ_systemd.py'
Jan 10 17:04:24 compute-0 sudo[122503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:25 compute-0 python3.9[122505]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:04:25 compute-0 sudo[122503]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:25 compute-0 ceph-mon[75249]: pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:25 compute-0 sudo[122656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzxkhsmflxwytmjshhteioodifnukup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064665.27602-44-24153959524768/AnsiballZ_command.py'
Jan 10 17:04:25 compute-0 sudo[122656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:25 compute-0 python3.9[122658]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:04:25 compute-0 sudo[122656]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:26 compute-0 sudo[122809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofpyzsbxzyrypsrhjptwwtwcdjfzfrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064666.0883904-52-254626049709325/AnsiballZ_stat.py'
Jan 10 17:04:26 compute-0 sudo[122809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:26 compute-0 python3.9[122811]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:04:26 compute-0 sudo[122809]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:27 compute-0 ceph-mon[75249]: pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:27 compute-0 sudo[122961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-darixzunxrebvkqjpbmnlsglfxhkuiei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064666.9159591-61-29554298476828/AnsiballZ_file.py'
Jan 10 17:04:27 compute-0 sudo[122961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:27 compute-0 python3.9[122963]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:27 compute-0 sudo[122961]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:27 compute-0 sshd-session[122045]: Connection closed by 192.168.122.30 port 52758
Jan 10 17:04:27 compute-0 sshd-session[122042]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:04:27 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 10 17:04:27 compute-0 systemd[1]: session-42.scope: Consumed 4.146s CPU time.
Jan 10 17:04:27 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Jan 10 17:04:27 compute-0 systemd-logind[798]: Removed session 42.
Jan 10 17:04:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:29 compute-0 ceph-mon[75249]: pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:31 compute-0 ceph-mon[75249]: pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:31 compute-0 sshd-session[71403]: Received disconnect from 38.102.83.82 port 45756:11: disconnected by user
Jan 10 17:04:31 compute-0 sshd-session[71403]: Disconnected from user zuul 38.102.83.82 port 45756
Jan 10 17:04:31 compute-0 sshd-session[71400]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:04:31 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 10 17:04:31 compute-0 systemd[1]: session-18.scope: Consumed 1min 55.737s CPU time.
Jan 10 17:04:31 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Jan 10 17:04:31 compute-0 systemd-logind[798]: Removed session 18.
Jan 10 17:04:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:32 compute-0 ceph-mon[75249]: pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:32 compute-0 sshd-session[122988]: Accepted publickey for zuul from 192.168.122.30 port 47148 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:04:32 compute-0 systemd-logind[798]: New session 43 of user zuul.
Jan 10 17:04:32 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 10 17:04:32 compute-0 sshd-session[122988]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:04:33 compute-0 python3.9[123141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:04:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:34 compute-0 sudo[123295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopknyerngsykdzqdztxjotvkefsifge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064674.4141004-29-177549579465166/AnsiballZ_setup.py'
Jan 10 17:04:34 compute-0 sudo[123295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:35 compute-0 python3.9[123297]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:04:35 compute-0 ceph-mon[75249]: pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:35 compute-0 sudo[123295]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:35 compute-0 sudo[123379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfczhshubyjksyszexsjypbfxfcssnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064674.4141004-29-177549579465166/AnsiballZ_dnf.py'
Jan 10 17:04:35 compute-0 sudo[123379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:35 compute-0 python3.9[123381]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 17:04:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:37 compute-0 ceph-mon[75249]: pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:37 compute-0 sudo[123379]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:38 compute-0 python3.9[123532]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:04:38
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images']
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:04:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:04:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:04:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:39 compute-0 ceph-mon[75249]: pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:39 compute-0 python3.9[123683]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 17:04:40 compute-0 python3.9[123833]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:04:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:40 compute-0 python3.9[123983]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:04:41 compute-0 sshd-session[122991]: Connection closed by 192.168.122.30 port 47148
Jan 10 17:04:41 compute-0 sshd-session[122988]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:04:41 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 10 17:04:41 compute-0 systemd[1]: session-43.scope: Consumed 6.145s CPU time.
Jan 10 17:04:41 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Jan 10 17:04:41 compute-0 systemd-logind[798]: Removed session 43.
Jan 10 17:04:41 compute-0 ceph-mon[75249]: pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:43 compute-0 ceph-mon[75249]: pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:04:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:04:45 compute-0 ceph-mon[75249]: pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:46 compute-0 sshd-session[124009]: Accepted publickey for zuul from 192.168.122.30 port 42876 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:04:46 compute-0 systemd-logind[798]: New session 44 of user zuul.
Jan 10 17:04:46 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 10 17:04:46 compute-0 sshd-session[124009]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:04:47 compute-0 ceph-mon[75249]: pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:47 compute-0 python3.9[124162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:04:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:48 compute-0 sudo[124316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrjhvcdndhbpegrfwxjxjsqzrunixtns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064688.4803357-45-99129920254817/AnsiballZ_file.py'
Jan 10 17:04:48 compute-0 sudo[124316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:49 compute-0 python3.9[124318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:04:49 compute-0 sudo[124316]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:49 compute-0 ceph-mon[75249]: pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:49 compute-0 sudo[124468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynucpipdyoaruxllkhovtnasflkgmgtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064689.3592288-45-206631576837708/AnsiballZ_file.py'
Jan 10 17:04:49 compute-0 sudo[124468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:49 compute-0 python3.9[124470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:04:49 compute-0 sudo[124468]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:50 compute-0 sudo[124620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msxdpysdrajofqnhzlkwjogbafsitquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064690.1058009-60-195841109043771/AnsiballZ_stat.py'
Jan 10 17:04:50 compute-0 sudo[124620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:50 compute-0 python3.9[124622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:50 compute-0 sudo[124620]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:51 compute-0 ceph-mon[75249]: pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:51 compute-0 sudo[124743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsknlhczlsbcbtdmymlchykrvrrafvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064690.1058009-60-195841109043771/AnsiballZ_copy.py'
Jan 10 17:04:51 compute-0 sudo[124743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:51 compute-0 python3.9[124745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064690.1058009-60-195841109043771/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e485d678466a488a60f8e482454471a355c36f72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:51 compute-0 sudo[124743]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:51 compute-0 sudo[124895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjpzglwtnacuiroohkunococynvrubww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064691.6447883-60-248199502016995/AnsiballZ_stat.py'
Jan 10 17:04:51 compute-0 sudo[124895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:52 compute-0 python3.9[124897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:52 compute-0 sudo[124895]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:52 compute-0 sudo[125018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcoqodwpzrhfhodhzuofdtmykzbujsme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064691.6447883-60-248199502016995/AnsiballZ_copy.py'
Jan 10 17:04:52 compute-0 sudo[125018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:52 compute-0 python3.9[125020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064691.6447883-60-248199502016995/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ca038f02567930da0b541567198b9dabc46ea4df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:52 compute-0 sudo[125018]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:53 compute-0 sudo[125170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbaagitqielafjwhteczmstlqaqnmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064692.8077638-60-127709154005490/AnsiballZ_stat.py'
Jan 10 17:04:53 compute-0 sudo[125170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:53 compute-0 python3.9[125172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:53 compute-0 ceph-mon[75249]: pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:53 compute-0 sudo[125170]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:53 compute-0 sudo[125293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqahnwdspravbzhbdffqckoszayginaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064692.8077638-60-127709154005490/AnsiballZ_copy.py'
Jan 10 17:04:53 compute-0 sudo[125293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:53 compute-0 python3.9[125295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064692.8077638-60-127709154005490/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=41ab701d91a5d2c0623e2f0f9a873502cb129bb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:53 compute-0 sudo[125293]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:54 compute-0 sudo[125445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykehtzqvdtfqizthdxjxdqchegolzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064694.1404984-104-139838737939225/AnsiballZ_file.py'
Jan 10 17:04:54 compute-0 sudo[125445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:54 compute-0 python3.9[125447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:04:54 compute-0 sudo[125445]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:55 compute-0 sudo[125597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcvuymzosixfdolcuzsvsperlrpdivrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064694.7879822-104-250903054159754/AnsiballZ_file.py'
Jan 10 17:04:55 compute-0 sudo[125597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:55 compute-0 python3.9[125599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:04:55 compute-0 sudo[125597]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:55 compute-0 ceph-mon[75249]: pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:55 compute-0 sudo[125749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkkbqpyzacjkydqnuqxzhytofzrtrcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064695.4554915-119-262832685100687/AnsiballZ_stat.py'
Jan 10 17:04:55 compute-0 sudo[125749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:55 compute-0 python3.9[125751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:55 compute-0 sudo[125749]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:56 compute-0 sudo[125872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncatyyqlyqtfzliprydrogmdgoajdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064695.4554915-119-262832685100687/AnsiballZ_copy.py'
Jan 10 17:04:56 compute-0 sudo[125872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:56 compute-0 python3.9[125874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064695.4554915-119-262832685100687/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b6d15fd162bf0e10fa7a56e0e8f7a485557793ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:56 compute-0 sudo[125872]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:56 compute-0 sudo[126024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjohzwrtygdbwkrjdwasodkvzjllvmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064696.6584558-119-2037805664209/AnsiballZ_stat.py'
Jan 10 17:04:56 compute-0 sudo[126024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:57 compute-0 python3.9[126026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:57 compute-0 sudo[126024]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:57 compute-0 ceph-mon[75249]: pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:57 compute-0 sudo[126147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahetlyutjqssjrwqhfgbxwjekzvoakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064696.6584558-119-2037805664209/AnsiballZ_copy.py'
Jan 10 17:04:57 compute-0 sudo[126147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:57 compute-0 python3.9[126149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064696.6584558-119-2037805664209/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3b8b82f07f1ef991370ee1a21f059d8a61d3668d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:57 compute-0 sudo[126147]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:58 compute-0 sudo[126299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olinzcxdilvbscpdvtcqzqxytalzojbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064697.932487-119-244265231215869/AnsiballZ_stat.py'
Jan 10 17:04:58 compute-0 sudo[126299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:58 compute-0 sudo[126302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:04:58 compute-0 sudo[126302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:04:58 compute-0 sudo[126302]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:58 compute-0 python3.9[126301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:04:58 compute-0 sudo[126327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:04:58 compute-0 sudo[126327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:04:58 compute-0 sudo[126299]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:58 compute-0 sudo[126486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pparzogpqsjhuohvtkvymcsoljdywfwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064697.932487-119-244265231215869/AnsiballZ_copy.py'
Jan 10 17:04:58 compute-0 sudo[126486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:59 compute-0 python3.9[126488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064697.932487-119-244265231215869/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7af582baab9c3f815fac6ee51c17b6b6c5772501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:04:59 compute-0 sudo[126486]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:04:59 compute-0 sudo[126327]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:04:59 compute-0 sudo[126530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:04:59 compute-0 sudo[126530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:04:59 compute-0 sudo[126530]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:59 compute-0 sudo[126555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:04:59 compute-0 sudo[126555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:04:59 compute-0 ceph-mon[75249]: pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:04:59 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.500609251 +0000 UTC m=+0.046702024 container create 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:04:59 compute-0 systemd[1]: Started libpod-conmon-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope.
Jan 10 17:04:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:04:59 compute-0 sudo[126733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekenwegfnpuxmcqvfvrzeubgzxmmoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064699.253948-163-119311291577645/AnsiballZ_file.py'
Jan 10 17:04:59 compute-0 sudo[126733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.481441238 +0000 UTC m=+0.027534051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.57785682 +0000 UTC m=+0.123949613 container init 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.584596181 +0000 UTC m=+0.130688954 container start 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.587488193 +0000 UTC m=+0.133581006 container attach 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 17:04:59 compute-0 hardcore_satoshi[126734]: 167 167
Jan 10 17:04:59 compute-0 systemd[1]: libpod-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope: Deactivated successfully.
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.591007183 +0000 UTC m=+0.137099966 container died 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-63704ade634d76d74647a926a86b2af5ae27d36694ab174f0a404e3fabcc32d6-merged.mount: Deactivated successfully.
Jan 10 17:04:59 compute-0 podman[126670]: 2026-01-10 17:04:59.627338452 +0000 UTC m=+0.173431225 container remove 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:04:59 compute-0 systemd[1]: libpod-conmon-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope: Deactivated successfully.
Jan 10 17:04:59 compute-0 python3.9[126738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:04:59 compute-0 sudo[126733]: pam_unix(sudo:session): session closed for user root
Jan 10 17:04:59 compute-0 podman[126761]: 2026-01-10 17:04:59.803503324 +0000 UTC m=+0.045274014 container create 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:04:59 compute-0 systemd[1]: Started libpod-conmon-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope.
Jan 10 17:04:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:04:59 compute-0 podman[126761]: 2026-01-10 17:04:59.780999197 +0000 UTC m=+0.022769887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:04:59 compute-0 podman[126761]: 2026-01-10 17:04:59.884295674 +0000 UTC m=+0.126066354 container init 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:04:59 compute-0 podman[126761]: 2026-01-10 17:04:59.892757143 +0000 UTC m=+0.134527843 container start 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:04:59 compute-0 podman[126761]: 2026-01-10 17:04:59.897619811 +0000 UTC m=+0.139390581 container attach 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:05:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:00 compute-0 sudo[126934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fntadxrdsgctgrbobmxexszavytphfpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064699.9331722-163-173452727434298/AnsiballZ_file.py'
Jan 10 17:05:00 compute-0 sudo[126934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:00 compute-0 python3.9[126938]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:00 compute-0 sudo[126934]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:00 compute-0 interesting_goodall[126799]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:05:00 compute-0 interesting_goodall[126799]: --> All data devices are unavailable
Jan 10 17:05:00 compute-0 systemd[1]: libpod-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope: Deactivated successfully.
Jan 10 17:05:00 compute-0 podman[126761]: 2026-01-10 17:05:00.490572573 +0000 UTC m=+0.732343263 container died 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732-merged.mount: Deactivated successfully.
Jan 10 17:05:00 compute-0 podman[126761]: 2026-01-10 17:05:00.541459455 +0000 UTC m=+0.783230125 container remove 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:05:00 compute-0 systemd[1]: libpod-conmon-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope: Deactivated successfully.
Jan 10 17:05:00 compute-0 sudo[126555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:00 compute-0 sudo[127023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:05:00 compute-0 sudo[127023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:05:00 compute-0 sudo[127023]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:00 compute-0 sudo[127065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:05:00 compute-0 sudo[127065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:05:00 compute-0 sudo[127162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxmloviwnbnihaoeviuyzuxgnmtizdkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064700.563052-178-107697408756/AnsiballZ_stat.py'
Jan 10 17:05:00 compute-0 sudo[127162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:01 compute-0 python3.9[127164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.037000847 +0000 UTC m=+0.063741407 container create c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:05:01 compute-0 sudo[127162]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:01 compute-0 systemd[1]: Started libpod-conmon-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope.
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.009446346 +0000 UTC m=+0.036186996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:05:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.136246299 +0000 UTC m=+0.162986959 container init c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.144017199 +0000 UTC m=+0.170757769 container start c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.148125336 +0000 UTC m=+0.174865976 container attach c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:05:01 compute-0 agitated_shamir[127193]: 167 167
Jan 10 17:05:01 compute-0 systemd[1]: libpod-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope: Deactivated successfully.
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.150987457 +0000 UTC m=+0.177728007 container died c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e06dd0c099695b8c2848401ed8c9c74708ac80e34bcafda5b02492e5ec2e7bb-merged.mount: Deactivated successfully.
Jan 10 17:05:01 compute-0 podman[127176]: 2026-01-10 17:05:01.189145038 +0000 UTC m=+0.215885578 container remove c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 17:05:01 compute-0 systemd[1]: libpod-conmon-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope: Deactivated successfully.
Jan 10 17:05:01 compute-0 ceph-mon[75249]: pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.33672554 +0000 UTC m=+0.040595031 container create a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:05:01 compute-0 systemd[1]: Started libpod-conmon-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope.
Jan 10 17:05:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.31801173 +0000 UTC m=+0.021881271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.436974401 +0000 UTC m=+0.140843922 container init a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.444782322 +0000 UTC m=+0.148651813 container start a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.447934371 +0000 UTC m=+0.151803872 container attach a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:05:01 compute-0 sudo[127359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyjapaksfkeaggyoudtnlrpltqibqhyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064700.563052-178-107697408756/AnsiballZ_copy.py'
Jan 10 17:05:01 compute-0 sudo[127359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:01 compute-0 python3.9[127361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064700.563052-178-107697408756/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=96f3925768934c7395b739536fa8f7b4d1baf946 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:01 compute-0 sudo[127359]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:01 compute-0 competent_jemison[127320]: {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     "0": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "devices": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "/dev/loop3"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             ],
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_name": "ceph_lv0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_size": "21470642176",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "name": "ceph_lv0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "tags": {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_name": "ceph",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.crush_device_class": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.encrypted": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.objectstore": "bluestore",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_id": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.vdo": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.with_tpm": "0"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             },
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "vg_name": "ceph_vg0"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         }
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     ],
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     "1": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "devices": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "/dev/loop4"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             ],
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_name": "ceph_lv1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_size": "21470642176",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "name": "ceph_lv1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "tags": {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_name": "ceph",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.crush_device_class": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.encrypted": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.objectstore": "bluestore",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_id": "1",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.vdo": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.with_tpm": "0"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             },
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "vg_name": "ceph_vg1"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         }
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     ],
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     "2": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "devices": [
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "/dev/loop5"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             ],
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_name": "ceph_lv2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_size": "21470642176",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "name": "ceph_lv2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "tags": {
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.cluster_name": "ceph",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.crush_device_class": "",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.encrypted": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.objectstore": "bluestore",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osd_id": "2",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.vdo": "0",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:                 "ceph.with_tpm": "0"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             },
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "type": "block",
Jan 10 17:05:01 compute-0 competent_jemison[127320]:             "vg_name": "ceph_vg2"
Jan 10 17:05:01 compute-0 competent_jemison[127320]:         }
Jan 10 17:05:01 compute-0 competent_jemison[127320]:     ]
Jan 10 17:05:01 compute-0 competent_jemison[127320]: }
Jan 10 17:05:01 compute-0 systemd[1]: libpod-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope: Deactivated successfully.
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.774766943 +0000 UTC m=+0.478636444 container died a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 17:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762-merged.mount: Deactivated successfully.
Jan 10 17:05:01 compute-0 podman[127264]: 2026-01-10 17:05:01.820871059 +0000 UTC m=+0.524740560 container remove a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:05:01 compute-0 systemd[1]: libpod-conmon-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope: Deactivated successfully.
Jan 10 17:05:01 compute-0 sudo[127065]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:01 compute-0 sudo[127448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:05:01 compute-0 sudo[127448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:05:01 compute-0 sudo[127448]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:01 compute-0 sudo[127491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:05:01 compute-0 sudo[127491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:05:02 compute-0 sudo[127577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqzwwwdqnghsqjujaopitrdgostwybji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064701.8219924-178-257598700437892/AnsiballZ_stat.py'
Jan 10 17:05:02 compute-0 sudo[127577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:02 compute-0 python3.9[127579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.315272229 +0000 UTC m=+0.081442399 container create df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:05:02 compute-0 sudo[127577]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:02 compute-0 systemd[1]: Started libpod-conmon-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope.
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.280210075 +0000 UTC m=+0.046380325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:05:02 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.402897922 +0000 UTC m=+0.169068082 container init df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.409460228 +0000 UTC m=+0.175630388 container start df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.412956877 +0000 UTC m=+0.179127037 container attach df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:05:02 compute-0 competent_cray[127616]: 167 167
Jan 10 17:05:02 compute-0 systemd[1]: libpod-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope: Deactivated successfully.
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.415006895 +0000 UTC m=+0.181177045 container died df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd30b322662a431d83068439c6aaf28b83e591affb3c6e4d4394564f0f0a93b-merged.mount: Deactivated successfully.
Jan 10 17:05:02 compute-0 podman[127593]: 2026-01-10 17:05:02.450639884 +0000 UTC m=+0.216810044 container remove df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:05:02 compute-0 systemd[1]: libpod-conmon-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope: Deactivated successfully.
Jan 10 17:05:02 compute-0 podman[127708]: 2026-01-10 17:05:02.623332788 +0000 UTC m=+0.062026729 container create 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:05:02 compute-0 systemd[1]: Started libpod-conmon-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope.
Jan 10 17:05:02 compute-0 sudo[127771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanagmtqozdilggtafcfllulytdwmcsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064701.8219924-178-257598700437892/AnsiballZ_copy.py'
Jan 10 17:05:02 compute-0 sudo[127771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:02 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:02 compute-0 podman[127708]: 2026-01-10 17:05:02.602907379 +0000 UTC m=+0.041601370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:05:02 compute-0 podman[127708]: 2026-01-10 17:05:02.709800468 +0000 UTC m=+0.148494459 container init 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:05:02 compute-0 podman[127708]: 2026-01-10 17:05:02.723236609 +0000 UTC m=+0.161930600 container start 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:05:02 compute-0 podman[127708]: 2026-01-10 17:05:02.72753293 +0000 UTC m=+0.166226891 container attach 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 17:05:02 compute-0 python3.9[127776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064701.8219924-178-257598700437892/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3b8b82f07f1ef991370ee1a21f059d8a61d3668d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:02 compute-0 sudo[127771]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:03 compute-0 sudo[127973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqreokcyeilqxmlqiyzhsudeyzjckuxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064703.0010564-178-242534940115453/AnsiballZ_stat.py'
Jan 10 17:05:03 compute-0 sudo[127973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:03 compute-0 ceph-mon[75249]: pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:03 compute-0 python3.9[127981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:03 compute-0 lvm[128005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:05:03 compute-0 lvm[128005]: VG ceph_vg1 finished
Jan 10 17:05:03 compute-0 lvm[128002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:05:03 compute-0 lvm[128002]: VG ceph_vg0 finished
Jan 10 17:05:03 compute-0 sudo[127973]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:03 compute-0 lvm[128007]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:05:03 compute-0 lvm[128007]: VG ceph_vg2 finished
Jan 10 17:05:03 compute-0 ecstatic_cartwright[127772]: {}
Jan 10 17:05:03 compute-0 systemd[1]: libpod-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Deactivated successfully.
Jan 10 17:05:03 compute-0 systemd[1]: libpod-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Consumed 1.347s CPU time.
Jan 10 17:05:03 compute-0 podman[128057]: 2026-01-10 17:05:03.629329923 +0000 UTC m=+0.025560965 container died 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25-merged.mount: Deactivated successfully.
Jan 10 17:05:03 compute-0 podman[128057]: 2026-01-10 17:05:03.671311313 +0000 UTC m=+0.067542335 container remove 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:05:03 compute-0 systemd[1]: libpod-conmon-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Deactivated successfully.
Jan 10 17:05:03 compute-0 sudo[127491]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:05:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:05:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:05:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:05:03 compute-0 sudo[128168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsohotdafbnahkhegoulgkrhcfiibpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064703.0010564-178-242534940115453/AnsiballZ_copy.py'
Jan 10 17:05:03 compute-0 sudo[128119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:05:03 compute-0 sudo[128168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:03 compute-0 sudo[128119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:05:03 compute-0 sudo[128119]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:04 compute-0 python3.9[128171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064703.0010564-178-242534940115453/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c56b196327ab38c96598f9582974c28b6e44c1a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:04 compute-0 sudo[128168]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:05:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:05:04 compute-0 ceph-mon[75249]: pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:05 compute-0 sudo[128322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zliswgxyvsalrnfmtpftjgccfbwdlzaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064704.8230674-238-234221665677981/AnsiballZ_file.py'
Jan 10 17:05:05 compute-0 sudo[128322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:05 compute-0 python3.9[128324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:05 compute-0 sudo[128322]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:05 compute-0 sudo[128474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqsvslkqmpdivxnfunhscbaofpkvdzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064705.5200217-246-103613909924887/AnsiballZ_stat.py'
Jan 10 17:05:05 compute-0 sudo[128474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:06 compute-0 python3.9[128476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:06 compute-0 sudo[128474]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:06 compute-0 sudo[128597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkdubyrubqwfkqzhdoxatbzpmvngusju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064705.5200217-246-103613909924887/AnsiballZ_copy.py'
Jan 10 17:05:06 compute-0 sudo[128597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:06 compute-0 python3.9[128599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064705.5200217-246-103613909924887/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:06 compute-0 sudo[128597]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:07 compute-0 ceph-mon[75249]: pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:07 compute-0 sudo[128749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udyessnqgjwwhwmnbzgcrskyqrssgnpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064707.0535884-262-279787653755576/AnsiballZ_file.py'
Jan 10 17:05:07 compute-0 sudo[128749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:07 compute-0 python3.9[128751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:07 compute-0 sudo[128749]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:08 compute-0 sudo[128901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldmcaeonetphgavemspyyteammccrcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064707.7797709-270-99606809918452/AnsiballZ_stat.py'
Jan 10 17:05:08 compute-0 sudo[128901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:08 compute-0 python3.9[128903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:08 compute-0 sudo[128901]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:08 compute-0 sudo[129024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvasayxevmgvqhbylthehygzxpcicplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064707.7797709-270-99606809918452/AnsiballZ_copy.py'
Jan 10 17:05:08 compute-0 sudo[129024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:08 compute-0 python3.9[129026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064707.7797709-270-99606809918452/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:08 compute-0 sudo[129024]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:09 compute-0 ceph-mon[75249]: pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:09 compute-0 sudo[129176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lezdjbnpmuqdfgwnpllwvzmoscjfeyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064709.0443332-286-95912773041104/AnsiballZ_file.py'
Jan 10 17:05:09 compute-0 sudo[129176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:09 compute-0 python3.9[129178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:09 compute-0 sudo[129176]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:09 compute-0 sudo[129328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiznvbcucyirkklwmjlpqzxxisbupxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064709.690493-294-255146557262320/AnsiballZ_stat.py'
Jan 10 17:05:09 compute-0 sudo[129328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:10 compute-0 python3.9[129330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:10 compute-0 sudo[129328]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:10 compute-0 sudo[129451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkompyzcjbffmfectdgwjmzgxwdynkbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064709.690493-294-255146557262320/AnsiballZ_copy.py'
Jan 10 17:05:10 compute-0 sudo[129451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:10 compute-0 python3.9[129453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064709.690493-294-255146557262320/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:10 compute-0 sudo[129451]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:11 compute-0 ceph-mon[75249]: pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:11 compute-0 sudo[129603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdgdetexgessfumevnyqwkqceotvvrna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064710.9725304-310-210317353094706/AnsiballZ_file.py'
Jan 10 17:05:11 compute-0 sudo[129603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:11 compute-0 python3.9[129605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:11 compute-0 sudo[129603]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:11 compute-0 sudo[129755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibfzabzwrewjonudzjoqorsdjfiordvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064711.6718209-318-149066320588620/AnsiballZ_stat.py'
Jan 10 17:05:11 compute-0 sudo[129755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:12 compute-0 python3.9[129757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:12 compute-0 sudo[129755]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:12 compute-0 sudo[129878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklcdmggyfpcsqfpbvgejjoobxstwfny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064711.6718209-318-149066320588620/AnsiballZ_copy.py'
Jan 10 17:05:12 compute-0 sudo[129878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:12 compute-0 python3.9[129880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064711.6718209-318-149066320588620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:12 compute-0 sudo[129878]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:13 compute-0 sudo[130030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeoorzmewwyroajbpeggsqrwvexkrnpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064712.8334198-334-109669533169330/AnsiballZ_file.py'
Jan 10 17:05:13 compute-0 sudo[130030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:13 compute-0 ceph-mon[75249]: pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:13 compute-0 python3.9[130032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:13 compute-0 sudo[130030]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:13 compute-0 sudo[130182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxriwmyuqsyairvuzlacisirkqzjipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064713.4758482-342-182869410645144/AnsiballZ_stat.py'
Jan 10 17:05:13 compute-0 sudo[130182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:13 compute-0 python3.9[130184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:13 compute-0 sudo[130182]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:14 compute-0 sudo[130305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaisctzzynehotipdttrbbqqxkqhcnha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064713.4758482-342-182869410645144/AnsiballZ_copy.py'
Jan 10 17:05:14 compute-0 sudo[130305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:14 compute-0 python3.9[130307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064713.4758482-342-182869410645144/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:14 compute-0 sudo[130305]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:14 compute-0 sudo[130457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidlwyppyzirjleohxojiqywinglrltl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064714.6774914-358-78863995130317/AnsiballZ_file.py'
Jan 10 17:05:14 compute-0 sudo[130457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:15 compute-0 python3.9[130459]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:15 compute-0 sudo[130457]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:15 compute-0 ceph-mon[75249]: pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:15 compute-0 sudo[130609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdmkwxzqhnzsoxprusizpioskaasmno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064715.3562958-366-253020009294174/AnsiballZ_stat.py'
Jan 10 17:05:15 compute-0 sudo[130609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:15 compute-0 python3.9[130611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:15 compute-0 sudo[130609]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:16 compute-0 sudo[130732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twhqnbpzfmshnffccgcxgekycfoqvekq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064715.3562958-366-253020009294174/AnsiballZ_copy.py'
Jan 10 17:05:16 compute-0 sudo[130732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:16 compute-0 python3.9[130734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064715.3562958-366-253020009294174/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:16 compute-0 sudo[130732]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:16 compute-0 sshd-session[124012]: Connection closed by 192.168.122.30 port 42876
Jan 10 17:05:16 compute-0 sshd-session[124009]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:05:16 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 10 17:05:16 compute-0 systemd[1]: session-44.scope: Consumed 23.937s CPU time.
Jan 10 17:05:16 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Jan 10 17:05:16 compute-0 systemd-logind[798]: Removed session 44.
Jan 10 17:05:17 compute-0 ceph-mon[75249]: pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.275194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718275464, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6689, "num_deletes": 251, "total_data_size": 7852383, "memory_usage": 8005440, "flush_reason": "Manual Compaction"}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718322565, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5822486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 6832, "table_properties": {"data_size": 5798706, "index_size": 15474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7237, "raw_key_size": 63657, "raw_average_key_size": 22, "raw_value_size": 5743790, "raw_average_value_size": 2002, "num_data_blocks": 693, "num_entries": 2869, "num_filter_entries": 2869, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064238, "oldest_key_time": 1768064238, "file_creation_time": 1768064718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 47385 microseconds, and 18567 cpu microseconds.
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.322648) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5822486 bytes OK
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.322728) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.324977) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.325011) EVENT_LOG_v1 {"time_micros": 1768064718325005, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.325070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7824041, prev total WAL file size 7824041, number of live WAL files 2.
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.327318) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5686KB) 13(58KB) 8(1944B)]
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718327647, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5884390, "oldest_snapshot_seqno": -1}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2695 keys, 5837320 bytes, temperature: kUnknown
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718385050, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5837320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5813885, "index_size": 15582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 62097, "raw_average_key_size": 23, "raw_value_size": 5760310, "raw_average_value_size": 2137, "num_data_blocks": 698, "num_entries": 2695, "num_filter_entries": 2695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.385565) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5837320 bytes
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.387059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.4 rd, 101.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.6, 0.0 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2984, records dropped: 289 output_compression: NoCompression
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.387081) EVENT_LOG_v1 {"time_micros": 1768064718387068, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57461, "compaction_time_cpu_micros": 30019, "output_level": 6, "num_output_files": 1, "total_output_size": 5837320, "num_input_records": 2984, "num_output_records": 2695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388783, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388874, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388958, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 10 17:05:18 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.326903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:05:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:19 compute-0 ceph-mon[75249]: pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:21 compute-0 ceph-mon[75249]: pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:22 compute-0 sshd-session[130760]: Accepted publickey for zuul from 192.168.122.30 port 46908 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:05:22 compute-0 systemd-logind[798]: New session 45 of user zuul.
Jan 10 17:05:22 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 10 17:05:22 compute-0 sshd-session[130760]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:05:23 compute-0 sudo[130913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhbqtcyhcyoobitytldingrluemjctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064722.5207508-17-87972273444882/AnsiballZ_file.py'
Jan 10 17:05:23 compute-0 sudo[130913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:23 compute-0 python3.9[130915]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:23 compute-0 sudo[130913]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:23 compute-0 ceph-mon[75249]: pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:23 compute-0 sudo[131065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmewmitoivucqosqrkfewaqbigdhdjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064723.5338562-29-24083593284221/AnsiballZ_stat.py'
Jan 10 17:05:23 compute-0 sudo[131065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:24 compute-0 python3.9[131067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:24 compute-0 sudo[131065]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:24 compute-0 sudo[131188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnrovblmfpgepiouqfqyegjxhnzlrqwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064723.5338562-29-24083593284221/AnsiballZ_copy.py'
Jan 10 17:05:24 compute-0 sudo[131188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:24 compute-0 python3.9[131190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064723.5338562-29-24083593284221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7cc641ddc3c198361b04b7e13e353930d285d63f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:24 compute-0 sudo[131188]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:25 compute-0 ceph-mon[75249]: pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:25 compute-0 sudo[131340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnfhgvzxjplfuimhxzzjvfmaktdijzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064725.1008286-29-266859983038931/AnsiballZ_stat.py'
Jan 10 17:05:25 compute-0 sudo[131340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:25 compute-0 python3.9[131342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:25 compute-0 sudo[131340]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:26 compute-0 sudo[131463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzhawzwfxtplylgvyoysuivmurogzghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064725.1008286-29-266859983038931/AnsiballZ_copy.py'
Jan 10 17:05:26 compute-0 sudo[131463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:26 compute-0 python3.9[131465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064725.1008286-29-266859983038931/.source.conf _original_basename=ceph.conf follow=False checksum=212a91a5c6ea3008fced76612c32c83bbed76d72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:26 compute-0 sudo[131463]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:26 compute-0 sshd-session[130763]: Connection closed by 192.168.122.30 port 46908
Jan 10 17:05:26 compute-0 sshd-session[130760]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:05:26 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 10 17:05:26 compute-0 systemd[1]: session-45.scope: Consumed 2.882s CPU time.
Jan 10 17:05:26 compute-0 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Jan 10 17:05:26 compute-0 systemd-logind[798]: Removed session 45.
Jan 10 17:05:27 compute-0 ceph-mon[75249]: pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:29 compute-0 ceph-mon[75249]: pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:31 compute-0 ceph-mon[75249]: pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:31 compute-0 sshd-session[131490]: Accepted publickey for zuul from 192.168.122.30 port 44964 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:05:31 compute-0 systemd-logind[798]: New session 46 of user zuul.
Jan 10 17:05:31 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 10 17:05:31 compute-0 sshd-session[131490]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:05:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:32 compute-0 ceph-mon[75249]: pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:33 compute-0 python3.9[131643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:05:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:34 compute-0 sudo[131797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijljqvxtqkfujihdrnqysyfcwhjwngsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064733.7225204-29-128151364623913/AnsiballZ_file.py'
Jan 10 17:05:34 compute-0 sudo[131797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:34 compute-0 python3.9[131799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:34 compute-0 sudo[131797]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:34 compute-0 sudo[131949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcxjoidtypurwoqzinqikcdmoutuwhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064734.5718143-29-215374308064877/AnsiballZ_file.py'
Jan 10 17:05:34 compute-0 sudo[131949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:35 compute-0 python3.9[131951]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:05:35 compute-0 sudo[131949]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:35 compute-0 ceph-mon[75249]: pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:35 compute-0 python3.9[132101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:05:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:36 compute-0 sudo[132251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgpuewmyhaqztevbvedqqlvruiscdpgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064736.006076-52-212185022258975/AnsiballZ_seboolean.py'
Jan 10 17:05:36 compute-0 sudo[132251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:36 compute-0 python3.9[132253]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 10 17:05:37 compute-0 ceph-mon[75249]: pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:05:38
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'images']
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:05:38 compute-0 sudo[132251]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:38 compute-0 sudo[132407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbajenehllcqmwqflametvmfyciuwsjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064738.405041-62-2764787729037/AnsiballZ_setup.py'
Jan 10 17:05:38 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 10 17:05:38 compute-0 sudo[132407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:05:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:05:39 compute-0 python3.9[132409]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:05:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:05:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:39 compute-0 ceph-mon[75249]: pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:39 compute-0 sudo[132407]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:39 compute-0 sudo[132491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osiifcpmewkpmbbrcbocfymztycpqlwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064738.405041-62-2764787729037/AnsiballZ_dnf.py'
Jan 10 17:05:39 compute-0 sudo[132491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:39 compute-0 python3.9[132493]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:05:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:41 compute-0 sudo[132491]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:41 compute-0 ceph-mon[75249]: pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:42 compute-0 sudo[132644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmfbbbwqjdhltzbblfuskbxhdagkisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064741.3948393-74-244058244763313/AnsiballZ_systemd.py'
Jan 10 17:05:42 compute-0 sudo[132644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:42 compute-0 python3.9[132646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:05:42 compute-0 sudo[132644]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:43 compute-0 sudo[132799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghcugevrbhabmlankzovtozjgjxxzqgn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064742.6069279-82-144178101443343/AnsiballZ_edpm_nftables_snippet.py'
Jan 10 17:05:43 compute-0 sudo[132799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:43 compute-0 ceph-mon[75249]: pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:43 compute-0 python3[132801]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 10 17:05:43 compute-0 sudo[132799]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:43 compute-0 sudo[132951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqfdsycticitcrwylbbzqxscgwhkghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064743.640126-91-38642724668111/AnsiballZ_file.py'
Jan 10 17:05:43 compute-0 sudo[132951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:44 compute-0 python3.9[132953]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:44 compute-0 sudo[132951]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:05:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:05:44 compute-0 sudo[133103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wforpimmomuyckkioswklekmioicfmzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064744.3090243-99-269352166061785/AnsiballZ_stat.py'
Jan 10 17:05:44 compute-0 sudo[133103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:45 compute-0 python3.9[133105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:45 compute-0 sudo[133103]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:45 compute-0 ceph-mon[75249]: pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:45 compute-0 sudo[133181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacwnrkpbslqbgcxbmawbxzdcunczxyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064744.3090243-99-269352166061785/AnsiballZ_file.py'
Jan 10 17:05:45 compute-0 sudo[133181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:45 compute-0 python3.9[133183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:45 compute-0 sudo[133181]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:46 compute-0 sudo[133333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgmtpqnkobelyyibkbfcxzhygwfvufi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064745.6794055-111-15635956455635/AnsiballZ_stat.py'
Jan 10 17:05:46 compute-0 sudo[133333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:46 compute-0 python3.9[133335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:46 compute-0 sudo[133333]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:46 compute-0 sudo[133411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkgpucknjhztbnswehowzfqdswutnpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064745.6794055-111-15635956455635/AnsiballZ_file.py'
Jan 10 17:05:46 compute-0 sudo[133411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:46 compute-0 python3.9[133413]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5492imh7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:46 compute-0 sudo[133411]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:47 compute-0 sudo[133563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puotlrgdrivqslxcgrsupcyggneovhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064746.8616257-123-9166988318284/AnsiballZ_stat.py'
Jan 10 17:05:47 compute-0 sudo[133563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:47 compute-0 ceph-mon[75249]: pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:47 compute-0 python3.9[133565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:47 compute-0 sudo[133563]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:47 compute-0 sudo[133641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fppcpekgrmzuklehwfjxycgtfnujuugh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064746.8616257-123-9166988318284/AnsiballZ_file.py'
Jan 10 17:05:47 compute-0 sudo[133641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:47 compute-0 python3.9[133643]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:47 compute-0 sudo[133641]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:48 compute-0 sudo[133793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfsttguwgzxkfoqbsfajlwqnfhiedrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064748.0043123-136-277277059984206/AnsiballZ_command.py'
Jan 10 17:05:48 compute-0 sudo[133793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:48 compute-0 python3.9[133795]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:05:48 compute-0 sudo[133793]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:49 compute-0 sudo[133948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjllosbqhcqhnybanmfqtlmsygcmmbn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064748.7915447-144-180740301742461/AnsiballZ_edpm_nftables_from_files.py'
Jan 10 17:05:49 compute-0 sudo[133948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:49 compute-0 sshd-session[133849]: Invalid user user1 from 216.36.124.133 port 44482
Jan 10 17:05:49 compute-0 ceph-mon[75249]: pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:49 compute-0 python3[133950]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 17:05:49 compute-0 sshd-session[133849]: Connection closed by invalid user user1 216.36.124.133 port 44482 [preauth]
Jan 10 17:05:49 compute-0 sudo[133948]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:49 compute-0 sudo[134100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twwurzyiqgppnjybxuyrgpxedorghivq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064749.5778897-152-125728905625632/AnsiballZ_stat.py'
Jan 10 17:05:49 compute-0 sudo[134100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:50 compute-0 python3.9[134102]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:50 compute-0 sudo[134100]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:50 compute-0 sudo[134225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxgmdlqeuhovjifuhoudbvyjxvhwzvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064749.5778897-152-125728905625632/AnsiballZ_copy.py'
Jan 10 17:05:50 compute-0 sudo[134225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:50 compute-0 python3.9[134227]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064749.5778897-152-125728905625632/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:50 compute-0 sudo[134225]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:51 compute-0 ceph-mon[75249]: pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:51 compute-0 sudo[134377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxjfvnndvyzinmaawdtdejufpnpyzcwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064751.04687-167-144701447996540/AnsiballZ_stat.py'
Jan 10 17:05:51 compute-0 sudo[134377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:51 compute-0 python3.9[134379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:51 compute-0 sudo[134377]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:52 compute-0 sudo[134502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vymmrhunveebfipgzrmxxkvaosjcbruq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064751.04687-167-144701447996540/AnsiballZ_copy.py'
Jan 10 17:05:52 compute-0 sudo[134502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:52 compute-0 python3.9[134504]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064751.04687-167-144701447996540/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:52 compute-0 sudo[134502]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:52 compute-0 sudo[134654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiimnrexnnzysxqulcdecyrudvfyncei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064752.4682178-182-233430569845012/AnsiballZ_stat.py'
Jan 10 17:05:52 compute-0 sudo[134654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:52 compute-0 python3.9[134656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:53 compute-0 sudo[134654]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:53 compute-0 sudo[134779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfmbklqgilzettqammatsvnkzvpggwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064752.4682178-182-233430569845012/AnsiballZ_copy.py'
Jan 10 17:05:53 compute-0 sudo[134779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:53 compute-0 ceph-mon[75249]: pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:53 compute-0 python3.9[134781]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064752.4682178-182-233430569845012/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:53 compute-0 sudo[134779]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:54 compute-0 sudo[134931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnxcmbvcwympeencmkfbfqirraovjkvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064753.670583-197-60708047697570/AnsiballZ_stat.py'
Jan 10 17:05:54 compute-0 sudo[134931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:54 compute-0 python3.9[134933]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:54 compute-0 sudo[134931]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:54 compute-0 sudo[135056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledtuiaolnutrqprhniryadgmbmvvsmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064753.670583-197-60708047697570/AnsiballZ_copy.py'
Jan 10 17:05:54 compute-0 sudo[135056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:54 compute-0 python3.9[135058]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064753.670583-197-60708047697570/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:54 compute-0 sudo[135056]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:55 compute-0 sudo[135208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byilozdyvjggbxdnsyhqdygvtudwzgnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064754.9803271-212-31590854392819/AnsiballZ_stat.py'
Jan 10 17:05:55 compute-0 sudo[135208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:55 compute-0 ceph-mon[75249]: pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:55 compute-0 python3.9[135210]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:05:55 compute-0 sudo[135208]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:55 compute-0 sudo[135333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarnxenbgcoffejoseskdtbzhsutrtwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064754.9803271-212-31590854392819/AnsiballZ_copy.py'
Jan 10 17:05:55 compute-0 sudo[135333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:56 compute-0 python3.9[135335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064754.9803271-212-31590854392819/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:56 compute-0 sudo[135333]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:56 compute-0 sudo[135485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behixywpuxkhxtpzbheduzigfybblrim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064756.3213723-227-33422176710120/AnsiballZ_file.py'
Jan 10 17:05:56 compute-0 sudo[135485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:56 compute-0 python3.9[135487]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:56 compute-0 sudo[135485]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:57 compute-0 sudo[135637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aacdcogirgsskekjmggdhwsfivukfonk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064756.957621-235-32507352457030/AnsiballZ_command.py'
Jan 10 17:05:57 compute-0 sudo[135637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:57 compute-0 ceph-mon[75249]: pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:57 compute-0 python3.9[135639]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:05:57 compute-0 sudo[135637]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:57 compute-0 sudo[135792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmabeqansiozauhydubxwpwlvnvjabcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064757.5612571-243-201739871307298/AnsiballZ_blockinfile.py'
Jan 10 17:05:57 compute-0 sudo[135792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:58 compute-0 python3.9[135794]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:05:58 compute-0 sudo[135792]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:58 compute-0 sudo[135944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waarayibttbsvhefdtbfwurgsmfagona ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064758.3804593-252-240466254860667/AnsiballZ_command.py'
Jan 10 17:05:58 compute-0 sudo[135944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:58 compute-0 python3.9[135946]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:05:58 compute-0 sudo[135944]: pam_unix(sudo:session): session closed for user root
Jan 10 17:05:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:05:59 compute-0 ceph-mon[75249]: pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:05:59 compute-0 sudo[136097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmculifaaubqupiigxmtkymwadycebjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064759.0952137-260-24548205504082/AnsiballZ_stat.py'
Jan 10 17:05:59 compute-0 sudo[136097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:05:59 compute-0 python3.9[136099]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:05:59 compute-0 sudo[136097]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:00 compute-0 sudo[136251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzaaaxnoporkavojugbwwbxkatomlmwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064759.8833315-268-152194936960073/AnsiballZ_command.py'
Jan 10 17:06:00 compute-0 sudo[136251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:00 compute-0 python3.9[136253]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:00 compute-0 sudo[136251]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:00 compute-0 sudo[136406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbfidzkbdkqnolklkbzsqlfeuakhjnrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064760.5973208-276-50283814682159/AnsiballZ_file.py'
Jan 10 17:06:00 compute-0 sudo[136406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:01 compute-0 python3.9[136408]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:01 compute-0 sudo[136406]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:01 compute-0 ceph-mon[75249]: pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:02 compute-0 python3.9[136558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:06:03 compute-0 sudo[136709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooogqpshoucebqhmtphcorhwmxkvmbrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064762.818731-316-30623388054320/AnsiballZ_command.py'
Jan 10 17:06:03 compute-0 sudo[136709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:03 compute-0 python3.9[136711]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:9d:bd:06:c0" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:03 compute-0 ovs-vsctl[136712]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:9d:bd:06:c0 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 10 17:06:03 compute-0 sudo[136709]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:03 compute-0 ceph-mon[75249]: pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:03 compute-0 sudo[136862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvnngwpxeiuaivfzgqwbumxkdhxgqec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064763.5360346-325-202537139618993/AnsiballZ_command.py'
Jan 10 17:06:03 compute-0 sudo[136862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:03 compute-0 sudo[136865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:06:03 compute-0 sudo[136865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:03 compute-0 sudo[136865]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:03 compute-0 sudo[136890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:06:03 compute-0 sudo[136890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:04 compute-0 python3.9[136864]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:04 compute-0 sudo[136862]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:04 compute-0 sudo[137089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuirutfasadtxoqyvzpuekiciwtgdlpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064764.2365272-333-20855786365711/AnsiballZ_command.py'
Jan 10 17:06:04 compute-0 sudo[137089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:04 compute-0 sudo[136890]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:06:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:06:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:06:04 compute-0 sudo[137102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:06:04 compute-0 sudo[137102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:04 compute-0 sudo[137102]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:04 compute-0 python3.9[137097]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:04 compute-0 ovs-vsctl[137140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 10 17:06:04 compute-0 sudo[137127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:06:04 compute-0 sudo[137127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:04 compute-0 sudo[137089]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:04 compute-0 podman[137242]: 2026-01-10 17:06:04.959325158 +0000 UTC m=+0.034802595 container create fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 systemd[1]: Started libpod-conmon-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope.
Jan 10 17:06:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:04.944553663 +0000 UTC m=+0.020031120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:05.051059403 +0000 UTC m=+0.126536860 container init fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:05.057387087 +0000 UTC m=+0.132864524 container start fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:05.060237845 +0000 UTC m=+0.135715362 container attach fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 17:06:05 compute-0 focused_murdock[137289]: 167 167
Jan 10 17:06:05 compute-0 systemd[1]: libpod-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope: Deactivated successfully.
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:05.063968627 +0000 UTC m=+0.139446084 container died fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e4b7d23fb7ec8f404bc43782a1727de2042d10d9b073e6d6835b9b5937243a1-merged.mount: Deactivated successfully.
Jan 10 17:06:05 compute-0 podman[137242]: 2026-01-10 17:06:05.102632317 +0000 UTC m=+0.178109754 container remove fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 systemd[1]: libpod-conmon-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope: Deactivated successfully.
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.260057593 +0000 UTC m=+0.045099087 container create 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 python3.9[137344]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:06:05 compute-0 systemd[1]: Started libpod-conmon-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope.
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.239912731 +0000 UTC m=+0.024954245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.372056144 +0000 UTC m=+0.157097638 container init 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.380768013 +0000 UTC m=+0.165809507 container start 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.384062163 +0000 UTC m=+0.169103657 container attach 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 ceph-mon[75249]: pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:06:05 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:06:05 compute-0 sudo[137538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faoigphtavadxsydxjuxcjvtuvcbnyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064765.5098515-350-84624112983540/AnsiballZ_file.py'
Jan 10 17:06:05 compute-0 sudo[137538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:05 compute-0 keen_rhodes[137374]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:06:05 compute-0 keen_rhodes[137374]: --> All data devices are unavailable
Jan 10 17:06:05 compute-0 systemd[1]: libpod-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope: Deactivated successfully.
Jan 10 17:06:05 compute-0 podman[137355]: 2026-01-10 17:06:05.958313657 +0000 UTC m=+0.743355161 container died 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:05 compute-0 python3.9[137540]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835-merged.mount: Deactivated successfully.
Jan 10 17:06:06 compute-0 sudo[137538]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:06 compute-0 podman[137355]: 2026-01-10 17:06:06.009790748 +0000 UTC m=+0.794832242 container remove 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:06:06 compute-0 systemd[1]: libpod-conmon-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope: Deactivated successfully.
Jan 10 17:06:06 compute-0 sudo[137127]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:06 compute-0 sudo[137578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:06:06 compute-0 sudo[137578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:06 compute-0 sudo[137578]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:06 compute-0 sudo[137606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:06:06 compute-0 sudo[137606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:06 compute-0 sudo[137781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccjsytgtenzzztdvlzjkocgwrzslldh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064766.2001462-358-220778247418993/AnsiballZ_stat.py'
Jan 10 17:06:06 compute-0 sudo[137781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.597897092 +0000 UTC m=+0.070564006 container create c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:06:06 compute-0 systemd[1]: Started libpod-conmon-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope.
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.568779763 +0000 UTC m=+0.041446717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.702982572 +0000 UTC m=+0.175649456 container init c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.715616019 +0000 UTC m=+0.188282923 container start c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.720195944 +0000 UTC m=+0.192862848 container attach c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:06:06 compute-0 zealous_mendeleev[137788]: 167 167
Jan 10 17:06:06 compute-0 systemd[1]: libpod-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope: Deactivated successfully.
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.723346571 +0000 UTC m=+0.196013435 container died c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e11053760cdabbe73411ba514e0219a675a2e81a91329aa0cec200da5ede09-merged.mount: Deactivated successfully.
Jan 10 17:06:06 compute-0 podman[137752]: 2026-01-10 17:06:06.776733534 +0000 UTC m=+0.249400408 container remove c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 17:06:06 compute-0 systemd[1]: libpod-conmon-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope: Deactivated successfully.
Jan 10 17:06:06 compute-0 python3.9[137785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:06 compute-0 sudo[137781]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:06 compute-0 podman[137835]: 2026-01-10 17:06:06.961811218 +0000 UTC m=+0.053284202 container create 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 10 17:06:06 compute-0 systemd[1]: Started libpod-conmon-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope.
Jan 10 17:06:07 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:06.941138182 +0000 UTC m=+0.032611206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:07.037867743 +0000 UTC m=+0.129340737 container init 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:07.046290874 +0000 UTC m=+0.137763878 container start 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:07.049967105 +0000 UTC m=+0.141440089 container attach 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:07 compute-0 sudo[137908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqtppwqdssuxulxouwahynmuneseaart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064766.2001462-358-220778247418993/AnsiballZ_file.py'
Jan 10 17:06:07 compute-0 sudo[137908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:07 compute-0 python3.9[137911]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:07 compute-0 sudo[137908]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]: {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     "0": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "devices": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "/dev/loop3"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             ],
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_name": "ceph_lv0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_size": "21470642176",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "name": "ceph_lv0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "tags": {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_name": "ceph",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.crush_device_class": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.encrypted": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.objectstore": "bluestore",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_id": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.vdo": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.with_tpm": "0"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             },
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "vg_name": "ceph_vg0"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         }
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     ],
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     "1": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "devices": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "/dev/loop4"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             ],
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_name": "ceph_lv1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_size": "21470642176",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "name": "ceph_lv1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "tags": {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_name": "ceph",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.crush_device_class": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.encrypted": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.objectstore": "bluestore",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_id": "1",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.vdo": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.with_tpm": "0"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             },
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "vg_name": "ceph_vg1"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         }
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     ],
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     "2": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "devices": [
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "/dev/loop5"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             ],
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_name": "ceph_lv2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_size": "21470642176",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "name": "ceph_lv2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "tags": {
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.cluster_name": "ceph",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.crush_device_class": "",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.encrypted": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.objectstore": "bluestore",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osd_id": "2",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.vdo": "0",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:                 "ceph.with_tpm": "0"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             },
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "type": "block",
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:             "vg_name": "ceph_vg2"
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:         }
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]:     ]
Jan 10 17:06:07 compute-0 vigilant_lalande[137878]: }
Jan 10 17:06:07 compute-0 systemd[1]: libpod-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope: Deactivated successfully.
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:07.451154393 +0000 UTC m=+0.542627417 container died 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d-merged.mount: Deactivated successfully.
Jan 10 17:06:07 compute-0 podman[137835]: 2026-01-10 17:06:07.509623316 +0000 UTC m=+0.601096310 container remove 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:06:07 compute-0 systemd[1]: libpod-conmon-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope: Deactivated successfully.
Jan 10 17:06:07 compute-0 ceph-mon[75249]: pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:07 compute-0 sudo[137606]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:07 compute-0 sudo[138000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:06:07 compute-0 sudo[138000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:07 compute-0 sudo[138000]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:07 compute-0 sudo[138054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:06:07 compute-0 sudo[138054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:07 compute-0 sudo[138129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaupdjoiieapdxsvxxbejspehbcvbvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064767.525557-358-227677292764785/AnsiballZ_stat.py'
Jan 10 17:06:07 compute-0 sudo[138129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:07 compute-0 podman[138144]: 2026-01-10 17:06:07.963836778 +0000 UTC m=+0.042907797 container create 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:07 compute-0 python3.9[138131]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:08 compute-0 systemd[1]: Started libpod-conmon-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope.
Jan 10 17:06:08 compute-0 sudo[138129]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:07.948087456 +0000 UTC m=+0.027158495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:08.064898889 +0000 UTC m=+0.143969908 container init 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:08.070895923 +0000 UTC m=+0.149966942 container start 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:08.074552653 +0000 UTC m=+0.153623672 container attach 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:08 compute-0 adoring_dewdney[138163]: 167 167
Jan 10 17:06:08 compute-0 systemd[1]: libpod-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope: Deactivated successfully.
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:08.07735591 +0000 UTC m=+0.156426929 container died 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 17:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f4470869fa72f9c7077527baa5ea2668b5484f728db6f34ae3b45603db0cb47-merged.mount: Deactivated successfully.
Jan 10 17:06:08 compute-0 podman[138144]: 2026-01-10 17:06:08.116867403 +0000 UTC m=+0.195938422 container remove 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:06:08 compute-0 systemd[1]: libpod-conmon-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope: Deactivated successfully.
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:08 compute-0 sudo[138259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adyjjllhwhqgggwemckolpqzgywsoxqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064767.525557-358-227677292764785/AnsiballZ_file.py'
Jan 10 17:06:08 compute-0 sudo[138259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:08 compute-0 podman[138258]: 2026-01-10 17:06:08.28271456 +0000 UTC m=+0.045232471 container create 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:06:08 compute-0 systemd[1]: Started libpod-conmon-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope.
Jan 10 17:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:08 compute-0 podman[138258]: 2026-01-10 17:06:08.351069924 +0000 UTC m=+0.113587905 container init 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:06:08 compute-0 podman[138258]: 2026-01-10 17:06:08.261052596 +0000 UTC m=+0.023570567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:06:08 compute-0 podman[138258]: 2026-01-10 17:06:08.361908481 +0000 UTC m=+0.124426372 container start 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:06:08 compute-0 podman[138258]: 2026-01-10 17:06:08.365529471 +0000 UTC m=+0.128047412 container attach 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:06:08 compute-0 python3.9[138270]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:08 compute-0 sudo[138259]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:08 compute-0 sudo[138492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcyonbfbmjjfyrcdcndwddqohhmhrykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064768.6676505-381-4996714871238/AnsiballZ_file.py'
Jan 10 17:06:08 compute-0 sudo[138492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:09 compute-0 lvm[138511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:06:09 compute-0 lvm[138511]: VG ceph_vg2 finished
Jan 10 17:06:09 compute-0 lvm[138509]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:06:09 compute-0 lvm[138509]: VG ceph_vg1 finished
Jan 10 17:06:09 compute-0 lvm[138508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:06:09 compute-0 lvm[138508]: VG ceph_vg0 finished
Jan 10 17:06:09 compute-0 python3.9[138496]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:09 compute-0 sudo[138492]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:09 compute-0 amazing_benz[138278]: {}
Jan 10 17:06:09 compute-0 systemd[1]: libpod-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Deactivated successfully.
Jan 10 17:06:09 compute-0 systemd[1]: libpod-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Consumed 1.370s CPU time.
Jan 10 17:06:09 compute-0 podman[138258]: 2026-01-10 17:06:09.219586785 +0000 UTC m=+0.982104676 container died 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e-merged.mount: Deactivated successfully.
Jan 10 17:06:09 compute-0 podman[138258]: 2026-01-10 17:06:09.272114115 +0000 UTC m=+1.034632016 container remove 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:06:09 compute-0 systemd[1]: libpod-conmon-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Deactivated successfully.
Jan 10 17:06:09 compute-0 sudo[138054]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:06:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:06:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:09 compute-0 sudo[138623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:06:09 compute-0 sudo[138623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:06:09 compute-0 sudo[138623]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:09 compute-0 sudo[138698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptposmkymhvijhaxrrlhwfqkczkgruue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064769.2835045-389-80307210170571/AnsiballZ_stat.py'
Jan 10 17:06:09 compute-0 sudo[138698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:09 compute-0 ceph-mon[75249]: pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:06:09 compute-0 python3.9[138700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:09 compute-0 sudo[138698]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:09 compute-0 sudo[138776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwgfdikjgyibxaycfzvhyzwowxiifzup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064769.2835045-389-80307210170571/AnsiballZ_file.py'
Jan 10 17:06:09 compute-0 sudo[138776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:10 compute-0 python3.9[138778]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:10 compute-0 sudo[138776]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:10 compute-0 sudo[138928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrvtnpgbjpdqnriozixkexkhbcuvbaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064770.32858-401-151392210275181/AnsiballZ_stat.py'
Jan 10 17:06:10 compute-0 sudo[138928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:10 compute-0 python3.9[138930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:10 compute-0 sudo[138928]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:11 compute-0 sudo[139006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gylzenuemqmcombaxyihvsnvofojwhms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064770.32858-401-151392210275181/AnsiballZ_file.py'
Jan 10 17:06:11 compute-0 sudo[139006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:11 compute-0 python3.9[139008]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:11 compute-0 sudo[139006]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:11 compute-0 ceph-mon[75249]: pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:11 compute-0 sudo[139158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyalmpquyqzljwanpfyxqbsfaylfrwms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064771.423209-413-232087316435497/AnsiballZ_systemd.py'
Jan 10 17:06:11 compute-0 sudo[139158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:11 compute-0 python3.9[139160]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:06:11 compute-0 systemd[1]: Reloading.
Jan 10 17:06:12 compute-0 systemd-rc-local-generator[139186]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:06:12 compute-0 systemd-sysv-generator[139190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:06:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:12 compute-0 sudo[139158]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:12 compute-0 ceph-mon[75249]: pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:12 compute-0 sudo[139347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffjftxlgzaxgedawbpbmejpdijvaynl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064772.5306396-421-113713177634582/AnsiballZ_stat.py'
Jan 10 17:06:12 compute-0 sudo[139347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:12 compute-0 python3.9[139349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:13 compute-0 sudo[139347]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:13 compute-0 sudo[139425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwrnwpddagsvhddwezuwajqtfotixvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064772.5306396-421-113713177634582/AnsiballZ_file.py'
Jan 10 17:06:13 compute-0 sudo[139425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:13 compute-0 python3.9[139427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:13 compute-0 sudo[139425]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:13 compute-0 sudo[139577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjjdynhomvwsgvrancrpywwytaxdngvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064773.6645815-433-197214800790799/AnsiballZ_stat.py'
Jan 10 17:06:13 compute-0 sudo[139577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:14 compute-0 python3.9[139579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:14 compute-0 sudo[139577]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:14 compute-0 sudo[139655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdhaojxiheehzmddtkglwzucntyxixg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064773.6645815-433-197214800790799/AnsiballZ_file.py'
Jan 10 17:06:14 compute-0 sudo[139655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:14 compute-0 python3.9[139657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:14 compute-0 sudo[139655]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:15 compute-0 ceph-mon[75249]: pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:15 compute-0 sudo[139807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnnujoaunzjcbrwmmywxzromzbsclfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064774.9268084-445-174137645979858/AnsiballZ_systemd.py'
Jan 10 17:06:15 compute-0 sudo[139807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:15 compute-0 python3.9[139809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:06:15 compute-0 systemd[1]: Reloading.
Jan 10 17:06:15 compute-0 systemd-rc-local-generator[139837]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:06:15 compute-0 systemd-sysv-generator[139840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:06:15 compute-0 systemd[1]: Starting Create netns directory...
Jan 10 17:06:15 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 17:06:15 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 17:06:15 compute-0 systemd[1]: Finished Create netns directory.
Jan 10 17:06:15 compute-0 sudo[139807]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:16 compute-0 sudo[140000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlzrqhlkmllkiqqchrtndmogcllesqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064776.142318-455-75394969789441/AnsiballZ_file.py'
Jan 10 17:06:16 compute-0 sudo[140000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:16 compute-0 python3.9[140002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:16 compute-0 sudo[140000]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:17 compute-0 ceph-mon[75249]: pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:17 compute-0 sudo[140152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfeynqeobbqqzabzikeosqxkvpunbkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064776.947781-463-260263131858153/AnsiballZ_stat.py'
Jan 10 17:06:17 compute-0 sudo[140152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:17 compute-0 python3.9[140154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:17 compute-0 sudo[140152]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:17 compute-0 sudo[140275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limnpbefkbrrpfgzbxqqciopnhzvhudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064776.947781-463-260263131858153/AnsiballZ_copy.py'
Jan 10 17:06:17 compute-0 sudo[140275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:18 compute-0 python3.9[140277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064776.947781-463-260263131858153/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:18 compute-0 sudo[140275]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:18 compute-0 sudo[140427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umzkpwuzolmiskahexhkekgxlokhtfmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064778.5387452-480-174158555059355/AnsiballZ_file.py'
Jan 10 17:06:18 compute-0 sudo[140427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:19 compute-0 python3.9[140429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:19 compute-0 sudo[140427]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:19 compute-0 ceph-mon[75249]: pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:19 compute-0 sudo[140579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geijozhrcanluxyegusfnwwsivrxzuuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064779.3009782-488-255700052715049/AnsiballZ_file.py'
Jan 10 17:06:19 compute-0 sudo[140579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:19 compute-0 python3.9[140581]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:19 compute-0 sudo[140579]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:20 compute-0 sudo[140731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwqgczzldowplwrhfkpepaurompjmvnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064780.1669548-496-7010102275505/AnsiballZ_stat.py'
Jan 10 17:06:20 compute-0 sudo[140731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:20 compute-0 python3.9[140733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:20 compute-0 sudo[140731]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:21 compute-0 sudo[140854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhqlbvauvhnyicjqutkxhhpmrooeqdpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064780.1669548-496-7010102275505/AnsiballZ_copy.py'
Jan 10 17:06:21 compute-0 sudo[140854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:21 compute-0 python3.9[140856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064780.1669548-496-7010102275505/.source.json _original_basename=.rr0bslbi follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:21 compute-0 ceph-mon[75249]: pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:21 compute-0 sudo[140854]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:22 compute-0 python3.9[141006]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:23 compute-0 ceph-mon[75249]: pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:24 compute-0 sudo[141427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zksniirsgpdmmagsvexxabmofttgbnyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064783.7823572-536-166474156251135/AnsiballZ_container_config_data.py'
Jan 10 17:06:24 compute-0 sudo[141427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:24 compute-0 python3.9[141429]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 10 17:06:24 compute-0 sudo[141427]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:25 compute-0 ceph-mon[75249]: pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:25 compute-0 sudo[141579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmifxvfyibtfahqidbjhyfpxsrqbpypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064784.9039934-547-165125117922498/AnsiballZ_container_config_hash.py'
Jan 10 17:06:25 compute-0 sudo[141579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:25 compute-0 python3.9[141581]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 17:06:25 compute-0 sudo[141579]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:26 compute-0 sudo[141731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-powsrlibourtonsjagzgxauhsnaqictz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064785.8261528-557-265758692259335/AnsiballZ_edpm_container_manage.py'
Jan 10 17:06:26 compute-0 sudo[141731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:26 compute-0 python3[141733]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 17:06:27 compute-0 ceph-mon[75249]: pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.083175) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789083385, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 251, "total_data_size": 700175, "memory_usage": 714144, "flush_reason": "Manual Compaction"}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789089317, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 448426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6833, "largest_seqno": 7609, "table_properties": {"data_size": 445179, "index_size": 1091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8280, "raw_average_key_size": 19, "raw_value_size": 438295, "raw_average_value_size": 1031, "num_data_blocks": 52, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064719, "oldest_key_time": 1768064719, "file_creation_time": 1768064789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6145 microseconds, and 3384 cpu microseconds.
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.089378) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 448426 bytes OK
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.089408) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090867) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090895) EVENT_LOG_v1 {"time_micros": 1768064789090885, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 696261, prev total WAL file size 696261, number of live WAL files 2.
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.091954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(437KB)], [20(5700KB)]
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789092106, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6285746, "oldest_snapshot_seqno": -1}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2635 keys, 4600375 bytes, temperature: kUnknown
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789135181, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4600375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4580544, "index_size": 12170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 61213, "raw_average_key_size": 23, "raw_value_size": 4531027, "raw_average_value_size": 1719, "num_data_blocks": 550, "num_entries": 2635, "num_filter_entries": 2635, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.135762) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4600375 bytes
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.137480) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 106.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.6 +0.0 blob) out(4.4 +0.0 blob), read-write-amplify(24.3) write-amplify(10.3) OK, records in: 3120, records dropped: 485 output_compression: NoCompression
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.137515) EVENT_LOG_v1 {"time_micros": 1768064789137493, "job": 6, "event": "compaction_finished", "compaction_time_micros": 43166, "compaction_time_cpu_micros": 21223, "output_level": 6, "num_output_files": 1, "total_output_size": 4600375, "num_input_records": 3120, "num_output_records": 2635, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789137923, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789139170, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.091660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:06:29 compute-0 ceph-mon[75249]: pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:31 compute-0 ceph-mon[75249]: pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:32 compute-0 podman[141746]: 2026-01-10 17:06:32.055921262 +0000 UTC m=+5.323549676 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 17:06:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:32 compute-0 podman[141863]: 2026-01-10 17:06:32.151123932 +0000 UTC m=+0.020923405 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 17:06:32 compute-0 podman[141863]: 2026-01-10 17:06:32.840787488 +0000 UTC m=+0.710586981 container create a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 10 17:06:32 compute-0 python3[141733]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 17:06:32 compute-0 ceph-mon[75249]: pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:32 compute-0 sudo[141731]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:33 compute-0 sudo[142051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ravafaetngehqsbbkfkdnnwzfwcubplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064793.1862917-565-54854689488301/AnsiballZ_stat.py'
Jan 10 17:06:33 compute-0 sudo[142051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:33 compute-0 python3.9[142053]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:06:33 compute-0 sudo[142051]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:34 compute-0 sudo[142205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxtxtxjfaosnmdutcuijujvqecgzkoqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064794.0212774-574-123603009567044/AnsiballZ_file.py'
Jan 10 17:06:34 compute-0 sudo[142205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:34 compute-0 python3.9[142207]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:34 compute-0 sudo[142205]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:34 compute-0 sudo[142281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxbqkoufgpobiawlmdyptqzhqtghszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064794.0212774-574-123603009567044/AnsiballZ_stat.py'
Jan 10 17:06:34 compute-0 sudo[142281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:35 compute-0 python3.9[142283]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:06:35 compute-0 sudo[142281]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:35 compute-0 ceph-mon[75249]: pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:35 compute-0 sudo[142432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuskebmpuudpajxrtuktktcryhrltzia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064795.1105042-574-111213912652636/AnsiballZ_copy.py'
Jan 10 17:06:35 compute-0 sudo[142432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:36 compute-0 python3.9[142434]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064795.1105042-574-111213912652636/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:36 compute-0 sudo[142432]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:36 compute-0 sudo[142508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqboisgcfujywbwthqtfwchxvtfxwbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064795.1105042-574-111213912652636/AnsiballZ_systemd.py'
Jan 10 17:06:36 compute-0 sudo[142508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:36 compute-0 python3.9[142510]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:06:36 compute-0 systemd[1]: Reloading.
Jan 10 17:06:36 compute-0 systemd-rc-local-generator[142538]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:06:36 compute-0 systemd-sysv-generator[142541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:06:37 compute-0 sudo[142508]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:37 compute-0 sudo[142620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkrdntskojtsnoxbehjrffishadhumrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064795.1105042-574-111213912652636/AnsiballZ_systemd.py'
Jan 10 17:06:37 compute-0 sudo[142620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:37 compute-0 ceph-mon[75249]: pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:37 compute-0 python3.9[142622]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:06:37 compute-0 systemd[1]: Reloading.
Jan 10 17:06:37 compute-0 systemd-rc-local-generator[142651]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:06:37 compute-0 systemd-sysv-generator[142655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:06:38
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups']
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:06:38 compute-0 systemd[1]: Starting ovn_controller container...
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:38 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89c32a35016ce12f156ce7d42ff13e3d9db2ffe7b430448d292e6f318ab3d1b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 10 17:06:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f.
Jan 10 17:06:38 compute-0 podman[142663]: 2026-01-10 17:06:38.335249349 +0000 UTC m=+0.167162614 container init a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + sudo -E kolla_set_configs
Jan 10 17:06:38 compute-0 podman[142663]: 2026-01-10 17:06:38.377052115 +0000 UTC m=+0.208965320 container start a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 17:06:38 compute-0 edpm-start-podman-container[142663]: ovn_controller
Jan 10 17:06:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 10 17:06:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 10 17:06:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 10 17:06:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 10 17:06:38 compute-0 edpm-start-podman-container[142662]: Creating additional drop-in dependency for "ovn_controller" (a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f)
Jan 10 17:06:38 compute-0 systemd[142718]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 10 17:06:38 compute-0 podman[142685]: 2026-01-10 17:06:38.487866573 +0000 UTC m=+0.093933176 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:06:38 compute-0 systemd[1]: a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f-43958acf483cf7eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 17:06:38 compute-0 systemd[1]: a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f-43958acf483cf7eb.service: Failed with result 'exit-code'.
Jan 10 17:06:38 compute-0 systemd[1]: Reloading.
Jan 10 17:06:38 compute-0 systemd-rc-local-generator[142762]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:06:38 compute-0 systemd-sysv-generator[142767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:06:38 compute-0 systemd[142718]: Queued start job for default target Main User Target.
Jan 10 17:06:38 compute-0 systemd[142718]: Created slice User Application Slice.
Jan 10 17:06:38 compute-0 systemd[142718]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 10 17:06:38 compute-0 systemd[142718]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 17:06:38 compute-0 systemd[142718]: Reached target Paths.
Jan 10 17:06:38 compute-0 systemd[142718]: Reached target Timers.
Jan 10 17:06:38 compute-0 systemd[142718]: Starting D-Bus User Message Bus Socket...
Jan 10 17:06:38 compute-0 systemd[142718]: Starting Create User's Volatile Files and Directories...
Jan 10 17:06:38 compute-0 systemd[142718]: Finished Create User's Volatile Files and Directories.
Jan 10 17:06:38 compute-0 systemd[142718]: Listening on D-Bus User Message Bus Socket.
Jan 10 17:06:38 compute-0 systemd[142718]: Reached target Sockets.
Jan 10 17:06:38 compute-0 systemd[142718]: Reached target Basic System.
Jan 10 17:06:38 compute-0 systemd[142718]: Reached target Main User Target.
Jan 10 17:06:38 compute-0 systemd[142718]: Startup finished in 205ms.
Jan 10 17:06:38 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 10 17:06:38 compute-0 systemd[1]: Started Session c1 of User root.
Jan 10 17:06:38 compute-0 systemd[1]: Started ovn_controller container.
Jan 10 17:06:38 compute-0 sudo[142620]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:38 compute-0 ovn_controller[142678]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 17:06:38 compute-0 ovn_controller[142678]: INFO:__main__:Validating config file
Jan 10 17:06:38 compute-0 ovn_controller[142678]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 17:06:38 compute-0 ovn_controller[142678]: INFO:__main__:Writing out command to execute
Jan 10 17:06:38 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 10 17:06:38 compute-0 ovn_controller[142678]: ++ cat /run_command
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + ARGS=
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + sudo kolla_copy_cacerts
Jan 10 17:06:38 compute-0 systemd[1]: Started Session c2 of User root.
Jan 10 17:06:38 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + [[ ! -n '' ]]
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + . kolla_extend_start
Jan 10 17:06:38 compute-0 ovn_controller[142678]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + umask 0022
Jan 10 17:06:38 compute-0 ovn_controller[142678]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <info>  [1768064798.9816] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:06:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <info>  [1768064798.9827] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <warn>  [1768064798.9830] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <info>  [1768064798.9841] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <info>  [1768064798.9851] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 10 17:06:38 compute-0 NetworkManager[49047]: <info>  [1768064798.9857] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 10 17:06:38 compute-0 kernel: br-int: entered promiscuous mode
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 10 17:06:38 compute-0 ovn_controller[142678]: 2026-01-10T17:06:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 17:06:39 compute-0 NetworkManager[49047]: <info>  [1768064799.0119] manager: (ovn-198a04-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 10 17:06:39 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:06:39 compute-0 NetworkManager[49047]: <info>  [1768064799.0372] device (genev_sys_6081): carrier: link connected
Jan 10 17:06:39 compute-0 NetworkManager[49047]: <info>  [1768064799.0375] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 17:06:39 compute-0 systemd-udevd[142814]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 17:06:39 compute-0 systemd-udevd[142815]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:06:39 compute-0 ovn_controller[142678]: 2026-01-10T17:06:39Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:06:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:06:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:39 compute-0 ceph-mon[75249]: pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:39 compute-0 python3.9[142945]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 10 17:06:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:40 compute-0 sudo[143095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-runvaalrcbvfnnvxkvfawupopfqdvedm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064800.2842486-619-239591908359752/AnsiballZ_stat.py'
Jan 10 17:06:40 compute-0 sudo[143095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:40 compute-0 python3.9[143097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:40 compute-0 sudo[143095]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:41 compute-0 sudo[143218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utfticnbmkzycnreakzrzrxmbbfstuqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064800.2842486-619-239591908359752/AnsiballZ_copy.py'
Jan 10 17:06:41 compute-0 sudo[143218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:41 compute-0 python3.9[143220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064800.2842486-619-239591908359752/.source.yaml _original_basename=.rn5e8xzu follow=False checksum=3b0154ee2942b8cfaec22aa738d9e56f48fa5c3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:06:41 compute-0 sudo[143218]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:41 compute-0 ceph-mon[75249]: pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:41 compute-0 sudo[143370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvelnllbcxwwrisgwbhhdqtroulkkoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064801.562979-634-249237555726413/AnsiballZ_command.py'
Jan 10 17:06:41 compute-0 sudo[143370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:42 compute-0 python3.9[143372]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:42 compute-0 ovs-vsctl[143373]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 10 17:06:42 compute-0 sudo[143370]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:42 compute-0 sudo[143523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utwbucnwaublryqntchvlrkvuelogwfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064802.230437-642-52606500554532/AnsiballZ_command.py'
Jan 10 17:06:42 compute-0 sudo[143523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:42 compute-0 python3.9[143525]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:42 compute-0 ovs-vsctl[143527]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 10 17:06:42 compute-0 sudo[143523]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:43 compute-0 sudo[143678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irifxrwgkwfhmuqcrqgjommnlvjcwkxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064803.1532621-656-148240659591517/AnsiballZ_command.py'
Jan 10 17:06:43 compute-0 sudo[143678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:43 compute-0 ceph-mon[75249]: pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:43 compute-0 python3.9[143680]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:06:43 compute-0 ovs-vsctl[143681]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 10 17:06:43 compute-0 sudo[143678]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:44 compute-0 sshd-session[131493]: Connection closed by 192.168.122.30 port 44964
Jan 10 17:06:44 compute-0 sshd-session[131490]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:06:44 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 10 17:06:44 compute-0 systemd[1]: session-46.scope: Consumed 1min 2.304s CPU time.
Jan 10 17:06:44 compute-0 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Jan 10 17:06:44 compute-0 systemd-logind[798]: Removed session 46.
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:06:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:06:45 compute-0 ceph-mon[75249]: pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:47 compute-0 ceph-mon[75249]: pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:48 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 10 17:06:48 compute-0 systemd[142718]: Activating special unit Exit the Session...
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped target Main User Target.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped target Basic System.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped target Paths.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped target Sockets.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped target Timers.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 10 17:06:48 compute-0 systemd[142718]: Closed D-Bus User Message Bus Socket.
Jan 10 17:06:48 compute-0 systemd[142718]: Stopped Create User's Volatile Files and Directories.
Jan 10 17:06:48 compute-0 systemd[142718]: Removed slice User Application Slice.
Jan 10 17:06:48 compute-0 systemd[142718]: Reached target Shutdown.
Jan 10 17:06:48 compute-0 systemd[142718]: Finished Exit the Session.
Jan 10 17:06:48 compute-0 systemd[142718]: Reached target Exit the Session.
Jan 10 17:06:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 10 17:06:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 10 17:06:49 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 10 17:06:49 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 10 17:06:49 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 10 17:06:49 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 10 17:06:49 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 10 17:06:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:49 compute-0 sshd-session[143707]: Accepted publickey for zuul from 192.168.122.30 port 33280 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:06:49 compute-0 systemd-logind[798]: New session 48 of user zuul.
Jan 10 17:06:49 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 10 17:06:49 compute-0 sshd-session[143707]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:06:49 compute-0 ceph-mon[75249]: pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:50 compute-0 python3.9[143860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:06:51 compute-0 sudo[144014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyemkqgxlayezstucjqnduebvyldgecj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064811.0420232-29-264830687075196/AnsiballZ_file.py'
Jan 10 17:06:51 compute-0 sudo[144014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:51 compute-0 python3.9[144016]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:51 compute-0 sudo[144014]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:51 compute-0 ceph-mon[75249]: pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:52 compute-0 sudo[144166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxtjvyfyljcdqzvleukkquknirnlfjjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064811.8712258-29-217174731461999/AnsiballZ_file.py'
Jan 10 17:06:52 compute-0 sudo[144166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:52 compute-0 python3.9[144168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:52 compute-0 sudo[144166]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:52 compute-0 sudo[144318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hivyielkzawwvlmkddqjgsbrfsfivaje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064812.5502732-29-239217185845418/AnsiballZ_file.py'
Jan 10 17:06:52 compute-0 sudo[144318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:53 compute-0 python3.9[144320]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:53 compute-0 sudo[144318]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:53 compute-0 sudo[144470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icrnhkgukylldlclsofrulfztkdmzuyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064813.2736473-29-65732019404653/AnsiballZ_file.py'
Jan 10 17:06:53 compute-0 sudo[144470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:53 compute-0 python3.9[144472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:53 compute-0 sudo[144470]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:53 compute-0 ceph-mon[75249]: pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:54 compute-0 sudo[144622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsyfbmjroeyuayazangddkanbqbydfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064813.934003-29-82439728529745/AnsiballZ_file.py'
Jan 10 17:06:54 compute-0 sudo[144622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:54 compute-0 python3.9[144624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:54 compute-0 sudo[144622]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:55 compute-0 python3.9[144774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:06:55 compute-0 sudo[144924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeqcrhhftclgbaswendobppmdyurxewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064815.3600457-73-162949523142027/AnsiballZ_seboolean.py'
Jan 10 17:06:55 compute-0 sudo[144924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:06:55 compute-0 ceph-mon[75249]: pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:56 compute-0 python3.9[144926]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 10 17:06:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:56 compute-0 sudo[144924]: pam_unix(sudo:session): session closed for user root
Jan 10 17:06:57 compute-0 python3.9[145077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:57 compute-0 ceph-mon[75249]: pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:58 compute-0 python3.9[145198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064816.9693048-81-192797117813710/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:06:58 compute-0 ceph-mon[75249]: pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:06:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:06:59 compute-0 python3.9[145348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:06:59 compute-0 python3.9[145469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064818.6912684-96-281359478951605/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:00 compute-0 sudo[145619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiawlfycphygyvkjxwwflpoazkfgakh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064820.0276573-113-234376519865589/AnsiballZ_setup.py'
Jan 10 17:07:00 compute-0 sudo[145619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:00 compute-0 python3.9[145621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:07:00 compute-0 sudo[145619]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:01 compute-0 ceph-mon[75249]: pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:01 compute-0 sudo[145703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dayhicumbmvskurdpdpeqtykbqxmlwzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064820.0276573-113-234376519865589/AnsiballZ_dnf.py'
Jan 10 17:07:01 compute-0 sudo[145703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:01 compute-0 python3.9[145705]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:07:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:03 compute-0 sudo[145703]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:03 compute-0 ceph-mon[75249]: pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:04 compute-0 sudo[145856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uprczrmbwzrnaxhyhrnjvbeeytqgwzur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064823.4376035-125-182166286882145/AnsiballZ_systemd.py'
Jan 10 17:07:04 compute-0 sudo[145856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:04 compute-0 python3.9[145858]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:07:04 compute-0 sudo[145856]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:05 compute-0 python3.9[146011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:05 compute-0 python3.9[146132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064824.706127-133-247400209816762/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:05 compute-0 ceph-mon[75249]: pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:06 compute-0 python3.9[146282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:06 compute-0 python3.9[146403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064825.883845-133-135031237273606/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:07 compute-0 ceph-mon[75249]: pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:08 compute-0 python3.9[146553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:08 compute-0 ovn_controller[142678]: 2026-01-10T17:07:08Z|00025|memory|INFO|16384 kB peak resident set size after 29.8 seconds
Jan 10 17:07:08 compute-0 ovn_controller[142678]: 2026-01-10T17:07:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 10 17:07:08 compute-0 podman[146648]: 2026-01-10 17:07:08.784683772 +0000 UTC m=+0.170203790 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:07:08 compute-0 python3.9[146684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064827.7651436-177-116100544987916/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:09 compute-0 python3.9[146850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:09 compute-0 sudo[146851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:07:09 compute-0 sudo[146851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:09 compute-0 sudo[146851]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:09 compute-0 sudo[146876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:07:09 compute-0 sudo[146876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:09 compute-0 ceph-mon[75249]: pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:10 compute-0 python3.9[147035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064829.0276961-177-62603822168689/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:10 compute-0 sudo[146876]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:07:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:07:10 compute-0 sudo[147100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:07:10 compute-0 sudo[147100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:10 compute-0 sudo[147100]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:10 compute-0 sudo[147125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:07:10 compute-0 sudo[147125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.656989154 +0000 UTC m=+0.041997677 container create f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:07:10 compute-0 systemd[1]: Started libpod-conmon-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope.
Jan 10 17:07:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.637966003 +0000 UTC m=+0.022974556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.738793363 +0000 UTC m=+0.123801916 container init f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.745466236 +0000 UTC m=+0.130474769 container start f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:07:10 compute-0 systemd[1]: libpod-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope: Deactivated successfully.
Jan 10 17:07:10 compute-0 nifty_diffie[147279]: 167 167
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.750481731 +0000 UTC m=+0.135490294 container attach f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:07:10 compute-0 conmon[147279]: conmon f08a367ef13e9791d80e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope/container/memory.events
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.753669284 +0000 UTC m=+0.138677807 container died f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-23153237b81e905ca124b33bdd0ec8edd15ca9e5c5b1f9c5ba6fe1c3bd003585-merged.mount: Deactivated successfully.
Jan 10 17:07:10 compute-0 podman[147214]: 2026-01-10 17:07:10.799907923 +0000 UTC m=+0.184916456 container remove f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:07:10 compute-0 systemd[1]: libpod-conmon-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope: Deactivated successfully.
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:07:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:07:10 compute-0 python3.9[147283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:07:10 compute-0 podman[147308]: 2026-01-10 17:07:10.984181929 +0000 UTC m=+0.047738643 container create 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 17:07:11 compute-0 systemd[1]: Started libpod-conmon-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope.
Jan 10 17:07:11 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:10.967336612 +0000 UTC m=+0.030893326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:11.076075701 +0000 UTC m=+0.139632465 container init 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:11.083484095 +0000 UTC m=+0.147040809 container start 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:11.087748939 +0000 UTC m=+0.151305693 container attach 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 17:07:11 compute-0 sudo[147485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pypsyyquluyvzdepwxawwsbcojykpiwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064831.162611-215-32584101085770/AnsiballZ_file.py'
Jan 10 17:07:11 compute-0 sudo[147485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:11 compute-0 focused_morse[147348]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:07:11 compute-0 focused_morse[147348]: --> All data devices are unavailable
Jan 10 17:07:11 compute-0 systemd[1]: libpod-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope: Deactivated successfully.
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:11.618310984 +0000 UTC m=+0.681867698 container died 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321-merged.mount: Deactivated successfully.
Jan 10 17:07:11 compute-0 podman[147308]: 2026-01-10 17:07:11.662991808 +0000 UTC m=+0.726548522 container remove 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 17:07:11 compute-0 systemd[1]: libpod-conmon-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope: Deactivated successfully.
Jan 10 17:07:11 compute-0 python3.9[147488]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:11 compute-0 sudo[147125]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:11 compute-0 sudo[147485]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:11 compute-0 sudo[147506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:07:11 compute-0 sudo[147506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:11 compute-0 sudo[147506]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:11 compute-0 sudo[147555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:07:11 compute-0 sudo[147555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:11 compute-0 ceph-mon[75249]: pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.078124419 +0000 UTC m=+0.037328632 container create de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:07:12 compute-0 systemd[1]: Started libpod-conmon-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope.
Jan 10 17:07:12 compute-0 sudo[147732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivfrjmxzaiwanlsapeprjwzmkknjiiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064831.8454695-223-170588835775294/AnsiballZ_stat.py'
Jan 10 17:07:12 compute-0 sudo[147732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.060609572 +0000 UTC m=+0.019813835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.17690797 +0000 UTC m=+0.136112233 container init de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:07:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.183184081 +0000 UTC m=+0.142388314 container start de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.18694229 +0000 UTC m=+0.146146513 container attach de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:07:12 compute-0 recursing_kalam[147736]: 167 167
Jan 10 17:07:12 compute-0 systemd[1]: libpod-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope: Deactivated successfully.
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.193009996 +0000 UTC m=+0.152214319 container died de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 17:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-097fa46bd5d260fec7d6c12df11a479f6353b8c6c4c07aa76f283c0d686e0450-merged.mount: Deactivated successfully.
Jan 10 17:07:12 compute-0 podman[147684]: 2026-01-10 17:07:12.244298001 +0000 UTC m=+0.203502254 container remove de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:07:12 compute-0 systemd[1]: libpod-conmon-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope: Deactivated successfully.
Jan 10 17:07:12 compute-0 python3.9[147738]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:12 compute-0 sudo[147732]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.450101611 +0000 UTC m=+0.046556759 container create 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:07:12 compute-0 systemd[1]: Started libpod-conmon-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope.
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.42901087 +0000 UTC m=+0.025465998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.564844294 +0000 UTC m=+0.161299492 container init 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.578109828 +0000 UTC m=+0.174564966 container start 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.582434834 +0000 UTC m=+0.178889962 container attach 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:07:12 compute-0 sudo[147857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simeuyaspbhkemmjjhkgrthqkyhxhpmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064831.8454695-223-170588835775294/AnsiballZ_file.py'
Jan 10 17:07:12 compute-0 sudo[147857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:12 compute-0 python3.9[147859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:12 compute-0 boring_benz[147802]: {
Jan 10 17:07:12 compute-0 boring_benz[147802]:     "0": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:         {
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "devices": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "/dev/loop3"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             ],
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_name": "ceph_lv0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_size": "21470642176",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "name": "ceph_lv0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "tags": {
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_name": "ceph",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.crush_device_class": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.encrypted": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.objectstore": "bluestore",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_id": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.vdo": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.with_tpm": "0"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             },
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "vg_name": "ceph_vg0"
Jan 10 17:07:12 compute-0 boring_benz[147802]:         }
Jan 10 17:07:12 compute-0 boring_benz[147802]:     ],
Jan 10 17:07:12 compute-0 boring_benz[147802]:     "1": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:         {
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "devices": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "/dev/loop4"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             ],
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_name": "ceph_lv1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_size": "21470642176",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "name": "ceph_lv1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "tags": {
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_name": "ceph",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.crush_device_class": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.encrypted": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.objectstore": "bluestore",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_id": "1",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.vdo": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.with_tpm": "0"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             },
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "vg_name": "ceph_vg1"
Jan 10 17:07:12 compute-0 boring_benz[147802]:         }
Jan 10 17:07:12 compute-0 boring_benz[147802]:     ],
Jan 10 17:07:12 compute-0 boring_benz[147802]:     "2": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:         {
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "devices": [
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "/dev/loop5"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             ],
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_name": "ceph_lv2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_size": "21470642176",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "name": "ceph_lv2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "tags": {
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.cluster_name": "ceph",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.crush_device_class": "",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.encrypted": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.objectstore": "bluestore",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osd_id": "2",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.vdo": "0",
Jan 10 17:07:12 compute-0 boring_benz[147802]:                 "ceph.with_tpm": "0"
Jan 10 17:07:12 compute-0 boring_benz[147802]:             },
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "type": "block",
Jan 10 17:07:12 compute-0 boring_benz[147802]:             "vg_name": "ceph_vg2"
Jan 10 17:07:12 compute-0 boring_benz[147802]:         }
Jan 10 17:07:12 compute-0 boring_benz[147802]:     ]
Jan 10 17:07:12 compute-0 boring_benz[147802]: }
Jan 10 17:07:12 compute-0 ceph-mon[75249]: pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:12 compute-0 sudo[147857]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:12 compute-0 systemd[1]: libpod-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope: Deactivated successfully.
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.901965157 +0000 UTC m=+0.498420265 container died 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69-merged.mount: Deactivated successfully.
Jan 10 17:07:12 compute-0 podman[147762]: 2026-01-10 17:07:12.950884754 +0000 UTC m=+0.547339862 container remove 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:07:12 compute-0 systemd[1]: libpod-conmon-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope: Deactivated successfully.
Jan 10 17:07:12 compute-0 sudo[147555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:13 compute-0 sudo[147910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:07:13 compute-0 sudo[147910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:13 compute-0 sudo[147910]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:13 compute-0 sudo[147972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:07:13 compute-0 sudo[147972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:13 compute-0 sudo[148077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgjoclfazgptuujzhntaupczujgoqde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064833.0224185-223-116081668852975/AnsiballZ_stat.py'
Jan 10 17:07:13 compute-0 sudo[148077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.39414146 +0000 UTC m=+0.046070025 container create 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 17:07:13 compute-0 systemd[1]: Started libpod-conmon-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope.
Jan 10 17:07:13 compute-0 python3.9[148079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.372893605 +0000 UTC m=+0.024822170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.470401279 +0000 UTC m=+0.122329874 container init 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.47630701 +0000 UTC m=+0.128235575 container start 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.480301896 +0000 UTC m=+0.132230501 container attach 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:07:13 compute-0 competent_hopper[148110]: 167 167
Jan 10 17:07:13 compute-0 systemd[1]: libpod-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope: Deactivated successfully.
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.484986141 +0000 UTC m=+0.136914706 container died 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 17:07:13 compute-0 sudo[148077]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8ec561fcb06c3f3d8af0c795d550277fdfb5562bbc7ad611e54d4726a0e680-merged.mount: Deactivated successfully.
Jan 10 17:07:13 compute-0 podman[148094]: 2026-01-10 17:07:13.528287225 +0000 UTC m=+0.180215810 container remove 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:07:13 compute-0 systemd[1]: libpod-conmon-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope: Deactivated successfully.
Jan 10 17:07:13 compute-0 podman[148180]: 2026-01-10 17:07:13.735565268 +0000 UTC m=+0.057388263 container create bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:07:13 compute-0 sudo[148223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geeilnyldmyuoqzqxdlukdgcvcpxemwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064833.0224185-223-116081668852975/AnsiballZ_file.py'
Jan 10 17:07:13 compute-0 sudo[148223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:13 compute-0 systemd[1]: Started libpod-conmon-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope.
Jan 10 17:07:13 compute-0 podman[148180]: 2026-01-10 17:07:13.707298269 +0000 UTC m=+0.029121264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:07:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:13 compute-0 podman[148180]: 2026-01-10 17:07:13.841913738 +0000 UTC m=+0.163736743 container init bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:07:13 compute-0 podman[148180]: 2026-01-10 17:07:13.853007969 +0000 UTC m=+0.174830934 container start bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 17:07:13 compute-0 podman[148180]: 2026-01-10 17:07:13.859289741 +0000 UTC m=+0.181112736 container attach bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:07:13 compute-0 python3.9[148227]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:13 compute-0 sudo[148223]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:14 compute-0 sudo[148440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zanselskvhqudqyjlpkvxxerwjomoagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064834.1843884-246-127360502000320/AnsiballZ_file.py'
Jan 10 17:07:14 compute-0 sudo[148440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:14 compute-0 lvm[148459]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:07:14 compute-0 lvm[148459]: VG ceph_vg0 finished
Jan 10 17:07:14 compute-0 lvm[148460]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:07:14 compute-0 lvm[148460]: VG ceph_vg1 finished
Jan 10 17:07:14 compute-0 lvm[148462]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:07:14 compute-0 lvm[148462]: VG ceph_vg2 finished
Jan 10 17:07:14 compute-0 python3.9[148446]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:14 compute-0 eager_chaum[148228]: {}
Jan 10 17:07:14 compute-0 sudo[148440]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:14 compute-0 systemd[1]: libpod-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Deactivated successfully.
Jan 10 17:07:14 compute-0 podman[148180]: 2026-01-10 17:07:14.69456318 +0000 UTC m=+1.016386135 container died bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 17:07:14 compute-0 systemd[1]: libpod-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Consumed 1.485s CPU time.
Jan 10 17:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838-merged.mount: Deactivated successfully.
Jan 10 17:07:14 compute-0 podman[148180]: 2026-01-10 17:07:14.754513517 +0000 UTC m=+1.076336512 container remove bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:07:14 compute-0 systemd[1]: libpod-conmon-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Deactivated successfully.
Jan 10 17:07:14 compute-0 sudo[147972]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:07:14 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:07:14 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:14 compute-0 sudo[148521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:07:14 compute-0 sudo[148521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:07:14 compute-0 sudo[148521]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:15 compute-0 sudo[148650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwblmkhxkgsmnncwizanxwdfpbypghgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064834.8536618-254-218806947811066/AnsiballZ_stat.py'
Jan 10 17:07:15 compute-0 sudo[148650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:15 compute-0 python3.9[148652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:15 compute-0 sudo[148650]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:15 compute-0 sudo[148728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgfpgnwiqgsdxrljsgdsqxfohntrvaok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064834.8536618-254-218806947811066/AnsiballZ_file.py'
Jan 10 17:07:15 compute-0 sudo[148728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:15 compute-0 ceph-mon[75249]: pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:15 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:15 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:07:15 compute-0 python3.9[148730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:15 compute-0 sudo[148728]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:16 compute-0 sudo[148880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtkjajughwcvikzcuzfxilzzqlyzywog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064836.0482788-266-106456363240723/AnsiballZ_stat.py'
Jan 10 17:07:16 compute-0 sudo[148880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:16 compute-0 python3.9[148882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:16 compute-0 sudo[148880]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:16 compute-0 ceph-mon[75249]: pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:16 compute-0 sudo[148958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajywgaexupevxqgxqmaavrwooreonknh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064836.0482788-266-106456363240723/AnsiballZ_file.py'
Jan 10 17:07:16 compute-0 sudo[148958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:17 compute-0 python3.9[148960]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:17 compute-0 sudo[148958]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:17 compute-0 sudo[149110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjsuemadugiqdeigzmfnkaatjeduumtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064837.3923116-278-141788212283452/AnsiballZ_systemd.py'
Jan 10 17:07:17 compute-0 sudo[149110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:18 compute-0 python3.9[149112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:07:18 compute-0 systemd[1]: Reloading.
Jan 10 17:07:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:18 compute-0 systemd-sysv-generator[149143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:07:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:07:18 compute-0 systemd-rc-local-generator[149138]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1817 writes, 7853 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1817 writes, 7853 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 8.61 MB, 0.01 MB/s
                                           Interval WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.7      0.06              0.02         3    0.019       0      0       0.0       0.0
                                             L6      1/0    4.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    115.3     98.9      0.10              0.05         2    0.050    6104    774       0.0       0.0
                                            Sum      1/0    4.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     73.3    101.0      0.16              0.07         5    0.032    6104    774       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     75.3    103.4      0.15              0.07         4    0.039    6104    774       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    115.3     98.9      0.10              0.05         2    0.050    6104    774       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.7      0.05              0.02         2    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.006, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 308.00 MB usage: 667.53 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000119 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(41,597.50 KB,0.189447%) FilterBlock(6,24.23 KB,0.00768389%) IndexBlock(6,45.80 KB,0.0145206%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 10 17:07:18 compute-0 sudo[149110]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:19 compute-0 sudo[149299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnyyvauaazopwufgczltuouxdbppmkli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064838.792947-286-137115644491878/AnsiballZ_stat.py'
Jan 10 17:07:19 compute-0 sudo[149299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:19 compute-0 ceph-mon[75249]: pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:19 compute-0 python3.9[149301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:19 compute-0 sudo[149299]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:19 compute-0 sudo[149377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvlipbujxmfjfbgdihhikwayffnlfiao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064838.792947-286-137115644491878/AnsiballZ_file.py'
Jan 10 17:07:19 compute-0 sudo[149377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:19 compute-0 python3.9[149379]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:19 compute-0 sudo[149377]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:20 compute-0 sudo[149529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhnarrryvynkknytcgilnzcwdsabikqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064840.0609775-298-108754653843540/AnsiballZ_stat.py'
Jan 10 17:07:20 compute-0 sudo[149529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:20 compute-0 python3.9[149531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:20 compute-0 sudo[149529]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:20 compute-0 sudo[149607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caweaaaofatdhdpfskoprisscuyjwtfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064840.0609775-298-108754653843540/AnsiballZ_file.py'
Jan 10 17:07:20 compute-0 sudo[149607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:21 compute-0 python3.9[149609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:21 compute-0 sudo[149607]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:21 compute-0 ceph-mon[75249]: pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:21 compute-0 sudo[149759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirxzjacunaqhyzwylgxwqarlpgjvlqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064841.2970316-310-250727628811456/AnsiballZ_systemd.py'
Jan 10 17:07:21 compute-0 sudo[149759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:21 compute-0 python3.9[149761]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:07:21 compute-0 systemd[1]: Reloading.
Jan 10 17:07:22 compute-0 systemd-rc-local-generator[149788]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:22 compute-0 systemd-sysv-generator[149791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:07:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:22 compute-0 systemd[1]: Starting Create netns directory...
Jan 10 17:07:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 17:07:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 17:07:22 compute-0 systemd[1]: Finished Create netns directory.
Jan 10 17:07:22 compute-0 sudo[149759]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:23 compute-0 sudo[149952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekpujfiovjfwbaiirokybkaiqcdyxuwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064842.7012336-320-172628675298699/AnsiballZ_file.py'
Jan 10 17:07:23 compute-0 sudo[149952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:23 compute-0 ceph-mon[75249]: pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:23 compute-0 python3.9[149954]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:23 compute-0 sudo[149952]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:23 compute-0 sudo[150104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iczotdchcapaxliyiywlefrfuayclfzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064843.5239952-328-75271160631345/AnsiballZ_stat.py'
Jan 10 17:07:23 compute-0 sudo[150104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:24 compute-0 python3.9[150106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:24 compute-0 sudo[150104]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:24 compute-0 sudo[150227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emposofnucrvgcfjvncxstdlsbzpqztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064843.5239952-328-75271160631345/AnsiballZ_copy.py'
Jan 10 17:07:24 compute-0 sudo[150227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:24 compute-0 python3.9[150229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064843.5239952-328-75271160631345/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:24 compute-0 sudo[150227]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:25 compute-0 ceph-mon[75249]: pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:25 compute-0 sudo[150379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covvmldtmppnukvjbgizyhjhcnmcpntg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064845.1397762-345-53405632923155/AnsiballZ_file.py'
Jan 10 17:07:25 compute-0 sudo[150379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:25 compute-0 python3.9[150381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:25 compute-0 sudo[150379]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:26 compute-0 sudo[150531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgecijfcgpojtbcziboonrjcobyhtshx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064845.838002-353-89119340560100/AnsiballZ_file.py'
Jan 10 17:07:26 compute-0 sudo[150531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:26 compute-0 python3.9[150533]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:07:26 compute-0 sudo[150531]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:26 compute-0 sudo[150683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yauucrenlqmhktzhocovuzmlntaiomlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064846.5554848-361-121698914602896/AnsiballZ_stat.py'
Jan 10 17:07:26 compute-0 sudo[150683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:27 compute-0 python3.9[150685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:27 compute-0 sudo[150683]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:27 compute-0 ceph-mon[75249]: pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:27 compute-0 sudo[150807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokxfwbhireelluklvlkkzjzjfityhzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064846.5554848-361-121698914602896/AnsiballZ_copy.py'
Jan 10 17:07:27 compute-0 sudo[150807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:27 compute-0 python3.9[150809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064846.5554848-361-121698914602896/.source.json _original_basename=.m1ydec72 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:27 compute-0 sudo[150807]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:28 compute-0 sshd-session[150779]: Connection closed by authenticating user root 216.36.124.133 port 45306 [preauth]
Jan 10 17:07:28 compute-0 python3.9[150960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:29 compute-0 ceph-mon[75249]: pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:30 compute-0 sudo[151381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govwusgklcvweulvjlnwcwjizaqdgkxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064850.213375-401-71527357586686/AnsiballZ_container_config_data.py'
Jan 10 17:07:30 compute-0 sudo[151381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:30 compute-0 python3.9[151383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 10 17:07:30 compute-0 sudo[151381]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:31 compute-0 ceph-mon[75249]: pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:31 compute-0 sudo[151533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxxebqwsxzpoagxovcqounmjxgmnelly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064851.2538853-412-193400840693515/AnsiballZ_container_config_hash.py'
Jan 10 17:07:31 compute-0 sudo[151533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:31 compute-0 python3.9[151535]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 17:07:31 compute-0 sudo[151533]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:32 compute-0 sudo[151685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvndhfjhbinbzkfczuyeejriuxxflwb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768064852.289733-422-203979680098903/AnsiballZ_edpm_container_manage.py'
Jan 10 17:07:32 compute-0 sudo[151685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:33 compute-0 python3[151687]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 17:07:33 compute-0 ceph-mon[75249]: pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:35 compute-0 ceph-mon[75249]: pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:37 compute-0 ceph-mon[75249]: pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:07:38
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'volumes', 'backups']
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:07:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:07:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:07:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:39 compute-0 ceph-mon[75249]: pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:40 compute-0 podman[151772]: 2026-01-10 17:07:40.321225517 +0000 UTC m=+1.318319500 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:07:41 compute-0 podman[151701]: 2026-01-10 17:07:41.760670012 +0000 UTC m=+8.570095148 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 17:07:41 compute-0 ceph-mon[75249]: pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:41 compute-0 podman[151855]: 2026-01-10 17:07:41.89666082 +0000 UTC m=+0.051247025 container create 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 10 17:07:41 compute-0 podman[151855]: 2026-01-10 17:07:41.871898943 +0000 UTC m=+0.026485158 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 17:07:41 compute-0 python3[151687]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 17:07:42 compute-0 sudo[151685]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:42 compute-0 sudo[152040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhcxwbuzxmfnjbzzwgedpuneguksuou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064862.2096121-430-24125517751436/AnsiballZ_stat.py'
Jan 10 17:07:42 compute-0 sudo[152040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:42 compute-0 python3.9[152042]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:07:42 compute-0 sudo[152040]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:43 compute-0 sudo[152194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlinfdhqbkqaxuazuhddfsyacgqvpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064862.9268243-439-238578671778571/AnsiballZ_file.py'
Jan 10 17:07:43 compute-0 sudo[152194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:43 compute-0 python3.9[152196]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:43 compute-0 sudo[152194]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:43 compute-0 sudo[152270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spnifggtmbqsqgijflzazzpeyelmmgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064862.9268243-439-238578671778571/AnsiballZ_stat.py'
Jan 10 17:07:43 compute-0 sudo[152270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:43 compute-0 ceph-mon[75249]: pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:43 compute-0 python3.9[152272]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:07:43 compute-0 sudo[152270]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:07:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:07:44 compute-0 sudo[152421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqoerwggypwvqxejjyzsslpylxqhvajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064863.9935336-439-773100210538/AnsiballZ_copy.py'
Jan 10 17:07:44 compute-0 sudo[152421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:44 compute-0 python3.9[152423]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064863.9935336-439-773100210538/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:44 compute-0 sudo[152421]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:44 compute-0 sudo[152497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qagwichqudqksxgriydrdysockywqhtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064863.9935336-439-773100210538/AnsiballZ_systemd.py'
Jan 10 17:07:44 compute-0 sudo[152497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:45 compute-0 python3.9[152499]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:07:45 compute-0 systemd[1]: Reloading.
Jan 10 17:07:45 compute-0 systemd-rc-local-generator[152526]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:45 compute-0 systemd-sysv-generator[152529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:07:45 compute-0 sudo[152497]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:45 compute-0 sudo[152607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojolhsreisltsrazneqlbfxfdnfmktdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064863.9935336-439-773100210538/AnsiballZ_systemd.py'
Jan 10 17:07:45 compute-0 sudo[152607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:45 compute-0 ceph-mon[75249]: pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:46 compute-0 python3.9[152609]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:07:46 compute-0 systemd[1]: Reloading.
Jan 10 17:07:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:46 compute-0 systemd-rc-local-generator[152639]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:46 compute-0 systemd-sysv-generator[152642]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:07:46 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 10 17:07:46 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9203d38291e46e8251c2b66f9fbb0ed4d4f73da5133d73ec8b17c7b3cb1b6e2d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9203d38291e46e8251c2b66f9fbb0ed4d4f73da5133d73ec8b17c7b3cb1b6e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 10 17:07:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f.
Jan 10 17:07:46 compute-0 podman[152650]: 2026-01-10 17:07:46.779234827 +0000 UTC m=+0.277119226 container init 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + sudo -E kolla_set_configs
Jan 10 17:07:46 compute-0 podman[152650]: 2026-01-10 17:07:46.807460325 +0000 UTC m=+0.305344684 container start 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Validating config file
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Copying service configuration files
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Writing out command to execute
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: ++ cat /run_command
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + CMD=neutron-ovn-metadata-agent
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + ARGS=
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + sudo kolla_copy_cacerts
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + [[ ! -n '' ]]
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + . kolla_extend_start
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: Running command: 'neutron-ovn-metadata-agent'
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + umask 0022
Jan 10 17:07:46 compute-0 ovn_metadata_agent[152665]: + exec neutron-ovn-metadata-agent
Jan 10 17:07:46 compute-0 edpm-start-podman-container[152650]: ovn_metadata_agent
Jan 10 17:07:47 compute-0 podman[152673]: 2026-01-10 17:07:47.0013668 +0000 UTC m=+0.180883089 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:07:47 compute-0 edpm-start-podman-container[152649]: Creating additional drop-in dependency for "ovn_metadata_agent" (4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f)
Jan 10 17:07:47 compute-0 systemd[1]: Reloading.
Jan 10 17:07:47 compute-0 ceph-mon[75249]: pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:47 compute-0 systemd-rc-local-generator[152750]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:47 compute-0 systemd-sysv-generator[152753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:07:47 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 10 17:07:47 compute-0 sudo[152607]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.871 152671 INFO neutron.common.config [-] Logging enabled!
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.871 152671 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 python3.9[152908]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.905 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.950 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.963 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name fbd04e21-7be2-4eb3-a385-03f0bb540a40 (UUID: fbd04e21-7be2-4eb3-a385-03f0bb540a40) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.987 152671 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.987 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.988 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.988 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.990 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 10 17:07:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.996 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.002 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'fbd04e21-7be2-4eb3-a385-03f0bb540a40'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f7a871b1d30>], external_ids={}, name=fbd04e21-7be2-4eb3-a385-03f0bb540a40, nb_cfg_timestamp=1768064807011, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.003 152671 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f7a87133eb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 INFO oslo_service.service [-] Starting 1 workers
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.008 152671 DEBUG oslo_service.service [-] Started child 152933 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.012 152671 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmply24caqc/privsep.sock']
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.013 152933 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-170691'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.037 152933 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.038 152933 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.038 152933 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.043 152933 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.049 152933 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.056 152933 INFO eventlet.wsgi.server [-] (152933) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 10 17:07:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:49 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 10 17:07:49 compute-0 sudo[153064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znfgukirzifjxwtqrcqefwnjznxuvqsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064869.337805-484-94805663345967/AnsiballZ_stat.py'
Jan 10 17:07:49 compute-0 sudo[153064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.717 152671 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.718 152671 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmply24caqc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.594 153043 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.601 153043 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.605 153043 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.605 153043 INFO oslo.privsep.daemon [-] privsep daemon running as pid 153043
Jan 10 17:07:49 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.721 153043 DEBUG oslo.privsep.daemon [-] privsep: reply[53cbdb0e-39aa-4805-aa93-cd46619c4370]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:07:49 compute-0 ceph-mon[75249]: pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:49 compute-0 python3.9[153066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:07:49 compute-0 sudo[153064]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:50 compute-0 sudo[153193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jioysamoekyokpasbsowgalwhbstylpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064869.337805-484-94805663345967/AnsiballZ_copy.py'
Jan 10 17:07:50 compute-0 sudo[153193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.255 153043 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.256 153043 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.256 153043 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:07:50 compute-0 python3.9[153195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064869.337805-484-94805663345967/.source.yaml _original_basename=.jo3c30n6 follow=False checksum=24e6f428ca40407899f031d999dfc3af0c87e301 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:07:50 compute-0 sudo[153193]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.771 153043 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8def25-921c-4741-96aa-c0261abfa229]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.777 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, column=external_ids, values=({'neutron:ovn-metadata-id': 'df62b40c-cd70-516a-95e8-1aab1acf968a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.800 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.808 152671 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.808 152671 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 sshd-session[143710]: Connection closed by 192.168.122.30 port 33280
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.824 152671 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.824 152671 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.825 152671 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.826 152671 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 sshd-session[143707]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 systemd[1]: session-48.scope: Consumed 59.281s CPU time.
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 systemd-logind[798]: Removed session 48.
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.838 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.838 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.841 152671 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.841 152671 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.862 152671 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.862 152671 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:07:50 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 10 17:07:51 compute-0 ceph-mon[75249]: pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:53 compute-0 ceph-mon[75249]: pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:55 compute-0 ceph-mon[75249]: pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:55 compute-0 sshd-session[153220]: Accepted publickey for zuul from 192.168.122.30 port 32862 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:07:55 compute-0 systemd-logind[798]: New session 49 of user zuul.
Jan 10 17:07:55 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 10 17:07:55 compute-0 sshd-session[153220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:07:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:57 compute-0 python3.9[153373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:07:57 compute-0 ceph-mon[75249]: pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:58 compute-0 sudo[153527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsckfbjevjwtjviuhnjaglrtdqmzewjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064877.5470304-29-178155117355664/AnsiballZ_command.py'
Jan 10 17:07:58 compute-0 sudo[153527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:58 compute-0 python3.9[153529]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:07:58 compute-0 sudo[153527]: pam_unix(sudo:session): session closed for user root
Jan 10 17:07:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:07:59 compute-0 sudo[153692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjwgspgevkhxrpcpygcgdkypeyutzzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064878.8751493-40-251601988326432/AnsiballZ_systemd_service.py'
Jan 10 17:07:59 compute-0 sudo[153692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:07:59 compute-0 python3.9[153694]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:07:59 compute-0 systemd[1]: Reloading.
Jan 10 17:07:59 compute-0 ceph-mon[75249]: pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:07:59 compute-0 systemd-rc-local-generator[153722]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:07:59 compute-0 systemd-sysv-generator[153725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:08:00 compute-0 sudo[153692]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:00 compute-0 python3.9[153879]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:08:01 compute-0 network[153896]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:08:01 compute-0 network[153897]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:08:01 compute-0 network[153898]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:08:01 compute-0 ceph-mon[75249]: pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:03 compute-0 ceph-mon[75249]: pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:04 compute-0 sudo[154158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notsuvddypfoaobzddvnmetkufetjzqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064884.4828353-59-17937792607924/AnsiballZ_systemd_service.py'
Jan 10 17:08:04 compute-0 sudo[154158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:04 compute-0 ceph-mon[75249]: pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:05 compute-0 python3.9[154160]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:05 compute-0 sudo[154158]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:05 compute-0 sudo[154311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmmviblglzaegmfahrbpoljumoojlth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064885.320835-59-25949752210512/AnsiballZ_systemd_service.py'
Jan 10 17:08:05 compute-0 sudo[154311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:05 compute-0 python3.9[154313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:06 compute-0 sudo[154311]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:06 compute-0 sudo[154464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsdzpconojldqblrlzlngculgsprxxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064886.165645-59-217665132340361/AnsiballZ_systemd_service.py'
Jan 10 17:08:06 compute-0 sudo[154464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:06 compute-0 python3.9[154466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:06 compute-0 sudo[154464]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:07 compute-0 ceph-mon[75249]: pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:07 compute-0 sudo[154617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqkrewlldxxpdavxhxpmmchmwlhzebq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064886.9401336-59-227208831770076/AnsiballZ_systemd_service.py'
Jan 10 17:08:07 compute-0 sudo[154617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:07 compute-0 python3.9[154619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:07 compute-0 sudo[154617]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:08 compute-0 sudo[154770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkkrnydskbefbxnqhiacdubkbnwwefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064887.8816278-59-151245392348323/AnsiballZ_systemd_service.py'
Jan 10 17:08:08 compute-0 sudo[154770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:08 compute-0 python3.9[154772]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:08 compute-0 sudo[154770]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:08 compute-0 sudo[154923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jplerjryundlkytnklgpjejgysbgonmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064888.715283-59-26681681137942/AnsiballZ_systemd_service.py'
Jan 10 17:08:08 compute-0 sudo[154923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:09 compute-0 ceph-mon[75249]: pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:09 compute-0 python3.9[154925]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:09 compute-0 sudo[154923]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:09 compute-0 sudo[155076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reawlattgprsgriwofcmejhzotwndvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064889.4440188-59-42189803260255/AnsiballZ_systemd_service.py'
Jan 10 17:08:09 compute-0 sudo[155076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:10 compute-0 python3.9[155078]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:08:10 compute-0 sudo[155076]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:10 compute-0 sudo[155229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzfvqcawtoqfxlnjhfrbfoiajlpdtles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064890.3787496-111-126181775649238/AnsiballZ_file.py'
Jan 10 17:08:10 compute-0 sudo[155229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:11 compute-0 python3.9[155231]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:11 compute-0 sudo[155229]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:11 compute-0 ceph-mon[75249]: pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:11 compute-0 sudo[155381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxrdtzosfvzjopcgpobetwsimeigemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064891.1983259-111-271223433554104/AnsiballZ_file.py'
Jan 10 17:08:11 compute-0 sudo[155381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:11 compute-0 python3.9[155383]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:11 compute-0 sudo[155381]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:12 compute-0 sudo[155543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flxkvnkxefussiznszesvjrdccpmcnab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064891.7899532-111-246095493953746/AnsiballZ_file.py'
Jan 10 17:08:12 compute-0 sudo[155543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:12 compute-0 podman[155507]: 2026-01-10 17:08:12.140809966 +0000 UTC m=+0.119152720 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 10 17:08:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:12 compute-0 python3.9[155545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:12 compute-0 sudo[155543]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:12 compute-0 sudo[155709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiuikxptzszcqfhhmhldpsgeirfhxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064892.4359097-111-80063785592322/AnsiballZ_file.py'
Jan 10 17:08:12 compute-0 sudo[155709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:12 compute-0 python3.9[155711]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:12 compute-0 sudo[155709]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:13 compute-0 ceph-mon[75249]: pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:13 compute-0 sudo[155861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvbgyjjvuarbnhnsfmnbzcoqsctwvyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064893.0930495-111-274685287630010/AnsiballZ_file.py'
Jan 10 17:08:13 compute-0 sudo[155861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:13 compute-0 python3.9[155863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:13 compute-0 sudo[155861]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:14 compute-0 sudo[156013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclfbvhdoydylehnzatmwdiwabccqntr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064893.806691-111-170528316056419/AnsiballZ_file.py'
Jan 10 17:08:14 compute-0 sudo[156013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:14 compute-0 python3.9[156015]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:14 compute-0 sudo[156013]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:14 compute-0 sudo[156165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyeglltejpnkxpoqwvrtugmqdgoanmao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064894.472472-111-10126585071689/AnsiballZ_file.py'
Jan 10 17:08:14 compute-0 sudo[156165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:14 compute-0 python3.9[156167]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:14 compute-0 sudo[156165]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:15 compute-0 sudo[156168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:08:15 compute-0 sudo[156168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:15 compute-0 sudo[156168]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:15 compute-0 sudo[156216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 10 17:08:15 compute-0 sudo[156216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:15 compute-0 ceph-mon[75249]: pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:15 compute-0 sudo[156216]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:15 compute-0 sudo[156389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxmlownerksftreylcewwokympglsxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064895.12706-161-244415256714526/AnsiballZ_file.py'
Jan 10 17:08:15 compute-0 sudo[156389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:08:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:08:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:15 compute-0 sudo[156392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:08:15 compute-0 sudo[156392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:15 compute-0 sudo[156392]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:15 compute-0 sudo[156417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:08:15 compute-0 sudo[156417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:15 compute-0 python3.9[156391]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:15 compute-0 sudo[156389]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:16 compute-0 sudo[156615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivlnwkgjhulclnyhpcheiuggwwbjuyjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064895.8563354-161-22522579386496/AnsiballZ_file.py'
Jan 10 17:08:16 compute-0 sudo[156615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:16 compute-0 sudo[156417]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:08:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:08:16 compute-0 sudo[156625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:08:16 compute-0 sudo[156625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:16 compute-0 sudo[156625]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:16 compute-0 python3.9[156624]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:16 compute-0 sudo[156650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:08:16 compute-0 sudo[156650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:16 compute-0 sudo[156615]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:08:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.615241871 +0000 UTC m=+0.044420581 container create 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 10 17:08:16 compute-0 systemd[1]: Started libpod-conmon-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope.
Jan 10 17:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.599186676 +0000 UTC m=+0.028365416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.700425099 +0000 UTC m=+0.129603829 container init 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.708662304 +0000 UTC m=+0.137841014 container start 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.712787851 +0000 UTC m=+0.141966591 container attach 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:08:16 compute-0 systemd[1]: libpod-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope: Deactivated successfully.
Jan 10 17:08:16 compute-0 hungry_poitras[156827]: 167 167
Jan 10 17:08:16 compute-0 conmon[156827]: conmon 2e0ebbb1f6add9e37b37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope/container/memory.events
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.715477841 +0000 UTC m=+0.144656571 container died 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:08:16 compute-0 sudo[156856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlgwiazlqqnwpvdmfhjtyvqvoyxurepa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064896.451531-161-127846713503600/AnsiballZ_file.py'
Jan 10 17:08:16 compute-0 sudo[156856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a296bee2a646f8c157f834cc1633ac4f583f3ee7ae936c82fc34940422c8f5-merged.mount: Deactivated successfully.
Jan 10 17:08:16 compute-0 podman[156766]: 2026-01-10 17:08:16.755525765 +0000 UTC m=+0.184704485 container remove 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 17:08:16 compute-0 systemd[1]: libpod-conmon-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope: Deactivated successfully.
Jan 10 17:08:16 compute-0 python3.9[156860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:16 compute-0 sudo[156856]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:16 compute-0 podman[156880]: 2026-01-10 17:08:16.960541625 +0000 UTC m=+0.069121824 container create 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:08:17 compute-0 systemd[1]: Started libpod-conmon-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope.
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:16.929538012 +0000 UTC m=+0.038118281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:17 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:17.059320516 +0000 UTC m=+0.167900735 container init 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:17.070763567 +0000 UTC m=+0.179343766 container start 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:17.075745923 +0000 UTC m=+0.184326122 container attach 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:08:17 compute-0 podman[156946]: 2026-01-10 17:08:17.117836635 +0000 UTC m=+0.058844691 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:08:17 compute-0 sudo[157068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyjkjydblwfbgxqcanafcqnvnxdrbjuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064897.0423665-161-274585512061510/AnsiballZ_file.py'
Jan 10 17:08:17 compute-0 sudo[157068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:17 compute-0 ceph-mon[75249]: pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:17 compute-0 python3.9[157072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:17 compute-0 sudo[157068]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:17 compute-0 fervent_cannon[156926]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:08:17 compute-0 fervent_cannon[156926]: --> All data devices are unavailable
Jan 10 17:08:17 compute-0 systemd[1]: libpod-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope: Deactivated successfully.
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:17.731322664 +0000 UTC m=+0.839902863 container died 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 17:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012-merged.mount: Deactivated successfully.
Jan 10 17:08:17 compute-0 podman[156880]: 2026-01-10 17:08:17.794273811 +0000 UTC m=+0.902854030 container remove 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:08:17 compute-0 systemd[1]: libpod-conmon-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope: Deactivated successfully.
Jan 10 17:08:17 compute-0 sudo[156650]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:17 compute-0 sudo[157196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:08:17 compute-0 sudo[157196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:17 compute-0 sudo[157196]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:18 compute-0 sudo[157245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:08:18 compute-0 sudo[157245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:18 compute-0 sudo[157294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzztrxmbhidbrwixapunggaqglhapcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064897.68126-161-234767140454374/AnsiballZ_file.py'
Jan 10 17:08:18 compute-0 sudo[157294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:18 compute-0 python3.9[157298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:18 compute-0 sudo[157294]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.363608968 +0000 UTC m=+0.058111896 container create 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 17:08:18 compute-0 systemd[1]: Started libpod-conmon-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope.
Jan 10 17:08:18 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.341952837 +0000 UTC m=+0.036455825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.447587166 +0000 UTC m=+0.142090144 container init 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.455608743 +0000 UTC m=+0.150111691 container start 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.459483443 +0000 UTC m=+0.153986391 container attach 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 17:08:18 compute-0 priceless_babbage[157357]: 167 167
Jan 10 17:08:18 compute-0 systemd[1]: libpod-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope: Deactivated successfully.
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.462429821 +0000 UTC m=+0.156932839 container died 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-afaaee9c5a08b90be4f2bd0db0aa4f9c1b9f36424df35ae351db25b69b70c2e8-merged.mount: Deactivated successfully.
Jan 10 17:08:18 compute-0 podman[157311]: 2026-01-10 17:08:18.505980852 +0000 UTC m=+0.200483820 container remove 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:08:18 compute-0 systemd[1]: libpod-conmon-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope: Deactivated successfully.
Jan 10 17:08:18 compute-0 podman[157472]: 2026-01-10 17:08:18.709301755 +0000 UTC m=+0.047887556 container create 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:08:18 compute-0 sudo[157513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baiclthiyxigmipohuhfwsnifweohbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064898.4307883-161-251103367426379/AnsiballZ_file.py'
Jan 10 17:08:18 compute-0 systemd[1]: Started libpod-conmon-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope.
Jan 10 17:08:18 compute-0 sudo[157513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:18 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:18 compute-0 podman[157472]: 2026-01-10 17:08:18.689051641 +0000 UTC m=+0.027637482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:18 compute-0 podman[157472]: 2026-01-10 17:08:18.788729382 +0000 UTC m=+0.127315193 container init 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:08:18 compute-0 podman[157472]: 2026-01-10 17:08:18.795642002 +0000 UTC m=+0.134227813 container start 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 17:08:18 compute-0 podman[157472]: 2026-01-10 17:08:18.79979339 +0000 UTC m=+0.138379191 container attach 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:08:18 compute-0 python3.9[157520]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:18 compute-0 sudo[157513]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:19 compute-0 nifty_swirles[157518]: {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     "0": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "devices": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "/dev/loop3"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             ],
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_name": "ceph_lv0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_size": "21470642176",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "name": "ceph_lv0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "tags": {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_name": "ceph",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.crush_device_class": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.encrypted": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.objectstore": "bluestore",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_id": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.vdo": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.with_tpm": "0"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             },
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "vg_name": "ceph_vg0"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         }
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     ],
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     "1": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "devices": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "/dev/loop4"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             ],
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_name": "ceph_lv1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_size": "21470642176",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "name": "ceph_lv1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "tags": {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_name": "ceph",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.crush_device_class": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.encrypted": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.objectstore": "bluestore",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_id": "1",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.vdo": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.with_tpm": "0"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             },
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "vg_name": "ceph_vg1"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         }
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     ],
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     "2": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "devices": [
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "/dev/loop5"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             ],
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_name": "ceph_lv2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_size": "21470642176",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "name": "ceph_lv2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "tags": {
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.cluster_name": "ceph",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.crush_device_class": "",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.encrypted": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.objectstore": "bluestore",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osd_id": "2",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.vdo": "0",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:                 "ceph.with_tpm": "0"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             },
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "type": "block",
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:             "vg_name": "ceph_vg2"
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:         }
Jan 10 17:08:19 compute-0 nifty_swirles[157518]:     ]
Jan 10 17:08:19 compute-0 nifty_swirles[157518]: }
Jan 10 17:08:19 compute-0 systemd[1]: libpod-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope: Deactivated successfully.
Jan 10 17:08:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:19 compute-0 podman[157472]: 2026-01-10 17:08:19.112492688 +0000 UTC m=+0.451078549 container died 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d-merged.mount: Deactivated successfully.
Jan 10 17:08:19 compute-0 podman[157472]: 2026-01-10 17:08:19.162164203 +0000 UTC m=+0.500749994 container remove 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:08:19 compute-0 systemd[1]: libpod-conmon-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope: Deactivated successfully.
Jan 10 17:08:19 compute-0 sudo[157245]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:19 compute-0 sudo[157617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:08:19 compute-0 sudo[157617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:19 compute-0 sudo[157617]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:19 compute-0 sudo[157664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:08:19 compute-0 sudo[157664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:19 compute-0 ceph-mon[75249]: pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:19 compute-0 sudo[157739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvrwdxayihngxzqmyfruujcrvpmtigqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064899.1091347-161-157963276333810/AnsiballZ_file.py'
Jan 10 17:08:19 compute-0 sudo[157739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.671594704 +0000 UTC m=+0.058321764 container create 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:08:19 compute-0 python3.9[157741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:08:19 compute-0 sudo[157739]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:19 compute-0 systemd[1]: Started libpod-conmon-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope.
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.642092872 +0000 UTC m=+0.028819602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:19 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.761075124 +0000 UTC m=+0.147801824 container init 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.768947097 +0000 UTC m=+0.155673777 container start 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.772415882 +0000 UTC m=+0.159142552 container attach 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 17:08:19 compute-0 great_bose[157771]: 167 167
Jan 10 17:08:19 compute-0 systemd[1]: libpod-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope: Deactivated successfully.
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.77533933 +0000 UTC m=+0.162066010 container died 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0088c4e02a2066e0d8fc8562dc417a280e280d52cd59bd3d1c06b49ff696c07-merged.mount: Deactivated successfully.
Jan 10 17:08:19 compute-0 podman[157754]: 2026-01-10 17:08:19.818026372 +0000 UTC m=+0.204753052 container remove 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:08:19 compute-0 systemd[1]: libpod-conmon-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope: Deactivated successfully.
Jan 10 17:08:20 compute-0 podman[157843]: 2026-01-10 17:08:20.061289846 +0000 UTC m=+0.069165535 container create 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 17:08:20 compute-0 systemd[1]: Started libpod-conmon-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope.
Jan 10 17:08:20 compute-0 podman[157843]: 2026-01-10 17:08:20.033435898 +0000 UTC m=+0.041311657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:08:20 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:08:20 compute-0 podman[157843]: 2026-01-10 17:08:20.16827535 +0000 UTC m=+0.176151039 container init 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:08:20 compute-0 podman[157843]: 2026-01-10 17:08:20.174776677 +0000 UTC m=+0.182652386 container start 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:08:20 compute-0 podman[157843]: 2026-01-10 17:08:20.17937244 +0000 UTC m=+0.187248159 container attach 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 17:08:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:20 compute-0 sudo[157967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfubzcfysoeerkmgrdkyjrwephhisqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064899.9490268-212-212364378566986/AnsiballZ_command.py'
Jan 10 17:08:20 compute-0 sudo[157967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:20 compute-0 python3.9[157969]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:20 compute-0 sudo[157967]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:20 compute-0 lvm[158122]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:08:20 compute-0 lvm[158122]: VG ceph_vg1 finished
Jan 10 17:08:20 compute-0 lvm[158121]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:08:20 compute-0 lvm[158121]: VG ceph_vg0 finished
Jan 10 17:08:20 compute-0 lvm[158124]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:08:20 compute-0 lvm[158124]: VG ceph_vg2 finished
Jan 10 17:08:20 compute-0 competent_mcclintock[157897]: {}
Jan 10 17:08:21 compute-0 systemd[1]: libpod-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Deactivated successfully.
Jan 10 17:08:21 compute-0 systemd[1]: libpod-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Consumed 1.281s CPU time.
Jan 10 17:08:21 compute-0 podman[157843]: 2026-01-10 17:08:21.008234374 +0000 UTC m=+1.016110053 container died 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08-merged.mount: Deactivated successfully.
Jan 10 17:08:21 compute-0 podman[157843]: 2026-01-10 17:08:21.054475114 +0000 UTC m=+1.062350783 container remove 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:08:21 compute-0 systemd[1]: libpod-conmon-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Deactivated successfully.
Jan 10 17:08:21 compute-0 sudo[157664]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:08:21 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:08:21 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:21 compute-0 sudo[158212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:08:21 compute-0 sudo[158212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:08:21 compute-0 sudo[158212]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:21 compute-0 python3.9[158211]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 17:08:21 compute-0 ceph-mon[75249]: pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:21 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:21 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:08:21 compute-0 sudo[158386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzvrwotwdolqzfziezkurvpatywxfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064901.5502176-230-240022166068510/AnsiballZ_systemd_service.py'
Jan 10 17:08:21 compute-0 sudo[158386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:22 compute-0 python3.9[158388]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:08:22 compute-0 systemd[1]: Reloading.
Jan 10 17:08:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:22 compute-0 systemd-sysv-generator[158420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:08:22 compute-0 systemd-rc-local-generator[158417]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:08:22 compute-0 sudo[158386]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:22 compute-0 sudo[158574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfmjzyplrxmhnxosxqyesqileuoajgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064902.596875-238-159932160034135/AnsiballZ_command.py'
Jan 10 17:08:22 compute-0 sudo[158574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:23 compute-0 python3.9[158576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:23 compute-0 sudo[158574]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:23 compute-0 ceph-mon[75249]: pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:23 compute-0 sudo[158727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcezleegqcotpqkpzosnejozrejtgrol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064903.2554507-238-140051022811971/AnsiballZ_command.py'
Jan 10 17:08:23 compute-0 sudo[158727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:23 compute-0 python3.9[158729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:23 compute-0 sudo[158727]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:24 compute-0 sudo[158880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-famkccdksimyvvaintijcclvobtuebno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064903.9853914-238-96292075799756/AnsiballZ_command.py'
Jan 10 17:08:24 compute-0 sudo[158880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:24 compute-0 python3.9[158882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:24 compute-0 sudo[158880]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:24 compute-0 sudo[159033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhaxoenboaqdabuzevlozyfsxivsysqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064904.6709816-238-268350666352759/AnsiballZ_command.py'
Jan 10 17:08:24 compute-0 sudo[159033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:25 compute-0 python3.9[159035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:25 compute-0 sudo[159033]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:25 compute-0 ceph-mon[75249]: pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:25 compute-0 sudo[159186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodzpajsksndycdbqrzivaxhgphsyirt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064905.3382416-238-19470196049107/AnsiballZ_command.py'
Jan 10 17:08:25 compute-0 sudo[159186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:25 compute-0 python3.9[159188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:26 compute-0 sudo[159186]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:26 compute-0 sudo[159339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxlurtalntioxvsaldukjcnsvsbxzqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064906.1807375-238-155661687341115/AnsiballZ_command.py'
Jan 10 17:08:26 compute-0 sudo[159339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:26 compute-0 python3.9[159341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:26 compute-0 sudo[159339]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:27 compute-0 sudo[159492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biowykzbecxwqdsfwkmefbatfzvdwjak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064906.8218374-238-44543756089529/AnsiballZ_command.py'
Jan 10 17:08:27 compute-0 sudo[159492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:27 compute-0 python3.9[159494]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:08:27 compute-0 sudo[159492]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:27 compute-0 ceph-mon[75249]: pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:28 compute-0 sudo[159645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladxjdumsedbdldyntegexhkepdmmqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064907.6906528-292-117325375216212/AnsiballZ_getent.py'
Jan 10 17:08:28 compute-0 sudo[159645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:28 compute-0 python3.9[159647]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 10 17:08:28 compute-0 sudo[159645]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:28 compute-0 sudo[159798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicxmskagazsmplaeppykwuqbdpdgqrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064908.4786015-300-226401309557397/AnsiballZ_group.py'
Jan 10 17:08:28 compute-0 sudo[159798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:29 compute-0 python3.9[159800]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 17:08:29 compute-0 groupadd[159801]: group added to /etc/group: name=libvirt, GID=42473
Jan 10 17:08:29 compute-0 groupadd[159801]: group added to /etc/gshadow: name=libvirt
Jan 10 17:08:29 compute-0 groupadd[159801]: new group: name=libvirt, GID=42473
Jan 10 17:08:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:29 compute-0 sudo[159798]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:29 compute-0 ceph-mon[75249]: pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:08:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s
                                           Interval WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:08:29 compute-0 sudo[159956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmiadigmmmbquichmtnamkxrraizrmzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064909.3007913-308-40110071142105/AnsiballZ_user.py'
Jan 10 17:08:29 compute-0 sudo[159956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:30 compute-0 python3.9[159958]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 17:08:30 compute-0 useradd[159960]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 10 17:08:30 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:08:30 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:08:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:30 compute-0 sudo[159956]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:30 compute-0 sudo[160117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvfahbvvsiytfnlifiugipjznjathes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064910.5943096-319-50948097812456/AnsiballZ_setup.py'
Jan 10 17:08:30 compute-0 sudo[160117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:31 compute-0 python3.9[160119]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:08:31 compute-0 ceph-mon[75249]: pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:31 compute-0 sudo[160117]: pam_unix(sudo:session): session closed for user root
Jan 10 17:08:32 compute-0 sudo[160201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxhusvbpbcucopyyqxcukuihaxstjkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768064910.5943096-319-50948097812456/AnsiballZ_dnf.py'
Jan 10 17:08:32 compute-0 sudo[160201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:08:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:32 compute-0 python3.9[160203]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:08:33 compute-0 ceph-mon[75249]: pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:08:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s
                                           Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:08:35 compute-0 ceph-mon[75249]: pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:37 compute-0 ceph-mon[75249]: pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:08:38
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:08:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:08:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:08:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:39 compute-0 ceph-mon[75249]: pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:41 compute-0 ceph-mon[75249]: pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:08:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s
                                           Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:08:43 compute-0 podman[160259]: 2026-01-10 17:08:43.183168489 +0000 UTC m=+0.165495655 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 10 17:08:43 compute-0 ceph-mon[75249]: pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:08:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:08:45 compute-0 ceph-mon[75249]: pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:47 compute-0 ceph-mon[75249]: pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:48 compute-0 podman[160414]: 2026-01-10 17:08:48.07078652 +0000 UTC m=+0.074597347 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 10 17:08:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:48 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 17:08:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.908 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:08:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.910 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:08:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.910 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:08:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:49 compute-0 ceph-mon[75249]: pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:51 compute-0 ceph-mon[75249]: pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:53 compute-0 ceph-mon[75249]: pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:55 compute-0 ceph-mon[75249]: pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:57 compute-0 ceph-mon[75249]: pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:08:58 compute-0 sshd-session[160440]: Connection closed by authenticating user root 216.36.124.133 port 46102 [preauth]
Jan 10 17:08:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:08:59 compute-0 ceph-mon[75249]: pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:01 compute-0 ceph-mon[75249]: pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:03 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 17:09:03 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 17:09:03 compute-0 ceph-mon[75249]: pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:05 compute-0 ceph-mon[75249]: pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:07 compute-0 ceph-mon[75249]: pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:09 compute-0 ceph-mon[75249]: pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:11 compute-0 ceph-mon[75249]: pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:12 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 17:09:12 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 17:09:13 compute-0 ceph-mon[75249]: pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:13 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 10 17:09:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:14 compute-0 podman[160458]: 2026-01-10 17:09:14.18342087 +0000 UTC m=+0.154827875 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 10 17:09:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.876863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954877092, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 251, "total_data_size": 1663798, "memory_usage": 1708256, "flush_reason": "Manual Compaction"}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954895167, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1621522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7610, "largest_seqno": 9117, "table_properties": {"data_size": 1614560, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13656, "raw_average_key_size": 18, "raw_value_size": 1600653, "raw_average_value_size": 2226, "num_data_blocks": 190, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064789, "oldest_key_time": 1768064789, "file_creation_time": 1768064954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 18360 microseconds, and 11183 cpu microseconds.
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.895243) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1621522 bytes OK
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.895285) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897108) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897129) EVENT_LOG_v1 {"time_micros": 1768064954897123, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1657239, prev total WAL file size 1657239, number of live WAL files 2.
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.898608) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1583KB)], [23(4492KB)]
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954898829, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 6221897, "oldest_snapshot_seqno": -1}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2840 keys, 4939457 bytes, temperature: kUnknown
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954956194, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4939457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4917720, "index_size": 13564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 65944, "raw_average_key_size": 23, "raw_value_size": 4864013, "raw_average_value_size": 1712, "num_data_blocks": 606, "num_entries": 2840, "num_filter_entries": 2840, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.956840) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4939457 bytes
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.959670) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.9 rd, 85.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.4 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.0) OK, records in: 3354, records dropped: 514 output_compression: NoCompression
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.959748) EVENT_LOG_v1 {"time_micros": 1768064954959690, "job": 8, "event": "compaction_finished", "compaction_time_micros": 57657, "compaction_time_cpu_micros": 38124, "output_level": 6, "num_output_files": 1, "total_output_size": 4939457, "num_input_records": 3354, "num_output_records": 2840, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954960330, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954961365, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.898285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:14 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:09:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:17 compute-0 ceph-mon[75249]: pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:19 compute-0 podman[160483]: 2026-01-10 17:09:19.106095086 +0000 UTC m=+0.088475149 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 10 17:09:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:19 compute-0 ceph-mon[75249]: pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:21 compute-0 sudo[160502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:09:21 compute-0 sudo[160502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:21 compute-0 sudo[160502]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:21 compute-0 sudo[160527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:09:21 compute-0 sudo[160527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:21 compute-0 ceph-mon[75249]: pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:22 compute-0 sudo[160527]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:09:22 compute-0 sudo[160585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:09:22 compute-0 sudo[160585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:22 compute-0 sudo[160585]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:22 compute-0 sudo[160610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:09:22 compute-0 sudo[160610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:09:22 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.67894171 +0000 UTC m=+0.049406597 container create 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:09:22 compute-0 systemd[1]: Started libpod-conmon-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope.
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.660018887 +0000 UTC m=+0.030483794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:22 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.779682539 +0000 UTC m=+0.150147476 container init 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.791734674 +0000 UTC m=+0.162199561 container start 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.795994023 +0000 UTC m=+0.166458950 container attach 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:09:22 compute-0 modest_einstein[160665]: 167 167
Jan 10 17:09:22 compute-0 systemd[1]: libpod-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope: Deactivated successfully.
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.801345565 +0000 UTC m=+0.171810492 container died 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 17:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cb938a9f521c2b516e0e28777277f1e9be39bc85e255af12e89e26e340b2061-merged.mount: Deactivated successfully.
Jan 10 17:09:22 compute-0 podman[160648]: 2026-01-10 17:09:22.860895397 +0000 UTC m=+0.231360284 container remove 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:09:22 compute-0 systemd[1]: libpod-conmon-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope: Deactivated successfully.
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.118045511 +0000 UTC m=+0.075139896 container create 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:09:23 compute-0 systemd[1]: Started libpod-conmon-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope.
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.088934529 +0000 UTC m=+0.046028984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:23 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.25379906 +0000 UTC m=+0.210893515 container init 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.265232006 +0000 UTC m=+0.222326421 container start 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.270479034 +0000 UTC m=+0.227573499 container attach 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:09:23 compute-0 ceph-mon[75249]: pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:23 compute-0 vigilant_dirac[160705]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:09:23 compute-0 vigilant_dirac[160705]: --> All data devices are unavailable
Jan 10 17:09:23 compute-0 systemd[1]: libpod-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope: Deactivated successfully.
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.87956123 +0000 UTC m=+0.836655625 container died 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502-merged.mount: Deactivated successfully.
Jan 10 17:09:23 compute-0 podman[160689]: 2026-01-10 17:09:23.962011046 +0000 UTC m=+0.919105431 container remove 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:09:23 compute-0 systemd[1]: libpod-conmon-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope: Deactivated successfully.
Jan 10 17:09:24 compute-0 sudo[160610]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:24 compute-0 sudo[160736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:09:24 compute-0 sudo[160736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:24 compute-0 sudo[160736]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:24 compute-0 sudo[160761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:09:24 compute-0 sudo[160761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.587116006 +0000 UTC m=+0.071552777 container create 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:09:24 compute-0 systemd[1]: Started libpod-conmon-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope.
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.547315051 +0000 UTC m=+0.031751832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.705667064 +0000 UTC m=+0.190103875 container init 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.712336966 +0000 UTC m=+0.196773717 container start 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:09:24 compute-0 quizzical_wu[160814]: 167 167
Jan 10 17:09:24 compute-0 systemd[1]: libpod-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope: Deactivated successfully.
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.718303247 +0000 UTC m=+0.202740078 container attach 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.719268446 +0000 UTC m=+0.203705227 container died 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-af768bd8d06d576d68877a5258bb6366f518122077251274b909b21db16e5592-merged.mount: Deactivated successfully.
Jan 10 17:09:24 compute-0 podman[160798]: 2026-01-10 17:09:24.773159097 +0000 UTC m=+0.257595878 container remove 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:09:24 compute-0 systemd[1]: libpod-conmon-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope: Deactivated successfully.
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.002948992 +0000 UTC m=+0.077267199 container create f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:09:25 compute-0 systemd[1]: Started libpod-conmon-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope.
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:24.968649054 +0000 UTC m=+0.042967291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:25 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.092622147 +0000 UTC m=+0.166940374 container init f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.104383353 +0000 UTC m=+0.178701560 container start f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.107877818 +0000 UTC m=+0.182196095 container attach f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:09:25 compute-0 elegant_kalam[160910]: {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     "0": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "devices": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "/dev/loop3"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             ],
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_name": "ceph_lv0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_size": "21470642176",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "name": "ceph_lv0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "tags": {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_name": "ceph",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.crush_device_class": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.encrypted": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.objectstore": "bluestore",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_id": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.vdo": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.with_tpm": "0"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             },
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "vg_name": "ceph_vg0"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         }
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     ],
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     "1": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "devices": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "/dev/loop4"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             ],
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_name": "ceph_lv1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_size": "21470642176",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "name": "ceph_lv1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "tags": {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_name": "ceph",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.crush_device_class": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.encrypted": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.objectstore": "bluestore",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_id": "1",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.vdo": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.with_tpm": "0"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             },
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "vg_name": "ceph_vg1"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         }
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     ],
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     "2": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "devices": [
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "/dev/loop5"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             ],
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_name": "ceph_lv2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_size": "21470642176",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "name": "ceph_lv2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "tags": {
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.cluster_name": "ceph",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.crush_device_class": "",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.encrypted": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.objectstore": "bluestore",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osd_id": "2",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.vdo": "0",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:                 "ceph.with_tpm": "0"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             },
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "type": "block",
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:             "vg_name": "ceph_vg2"
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:         }
Jan 10 17:09:25 compute-0 elegant_kalam[160910]:     ]
Jan 10 17:09:25 compute-0 elegant_kalam[160910]: }
Jan 10 17:09:25 compute-0 ceph-mon[75249]: pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:25 compute-0 systemd[1]: libpod-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope: Deactivated successfully.
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.4855539 +0000 UTC m=+0.559872107 container died f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00-merged.mount: Deactivated successfully.
Jan 10 17:09:25 compute-0 podman[160838]: 2026-01-10 17:09:25.538577495 +0000 UTC m=+0.612895702 container remove f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:09:25 compute-0 systemd[1]: libpod-conmon-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope: Deactivated successfully.
Jan 10 17:09:25 compute-0 sudo[160761]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:25 compute-0 sudo[161167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:09:25 compute-0 sudo[161167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:25 compute-0 sudo[161167]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:25 compute-0 sudo[161231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:09:25 compute-0 sudo[161231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:25.999533947 +0000 UTC m=+0.040495616 container create 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 17:09:26 compute-0 systemd[1]: Started libpod-conmon-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope.
Jan 10 17:09:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:26.067270148 +0000 UTC m=+0.108231797 container init 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:26.074886148 +0000 UTC m=+0.115847807 container start 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:25.979629995 +0000 UTC m=+0.020591694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:26 compute-0 happy_diffie[161517]: 167 167
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:26.078336083 +0000 UTC m=+0.119297752 container attach 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 17:09:26 compute-0 systemd[1]: libpod-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope: Deactivated successfully.
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:26.079175268 +0000 UTC m=+0.120136957 container died 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6023990152908865953e6ccf0a149252e22ada399c7bcd0daad3f906e79879-merged.mount: Deactivated successfully.
Jan 10 17:09:26 compute-0 podman[161446]: 2026-01-10 17:09:26.122782668 +0000 UTC m=+0.163744317 container remove 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:09:26 compute-0 systemd[1]: libpod-conmon-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope: Deactivated successfully.
Jan 10 17:09:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:26 compute-0 podman[161676]: 2026-01-10 17:09:26.369018881 +0000 UTC m=+0.051547711 container create c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:09:26 compute-0 systemd[1]: Started libpod-conmon-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope.
Jan 10 17:09:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:09:26 compute-0 podman[161676]: 2026-01-10 17:09:26.343008364 +0000 UTC m=+0.025537204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:09:26 compute-0 podman[161676]: 2026-01-10 17:09:26.455017224 +0000 UTC m=+0.137546124 container init c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 17:09:26 compute-0 podman[161676]: 2026-01-10 17:09:26.468514333 +0000 UTC m=+0.151043163 container start c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 17:09:26 compute-0 podman[161676]: 2026-01-10 17:09:26.472686759 +0000 UTC m=+0.155215679 container attach c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:09:27 compute-0 lvm[162293]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:09:27 compute-0 lvm[162291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:09:27 compute-0 lvm[162291]: VG ceph_vg0 finished
Jan 10 17:09:27 compute-0 lvm[162293]: VG ceph_vg1 finished
Jan 10 17:09:27 compute-0 lvm[162298]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:09:27 compute-0 lvm[162298]: VG ceph_vg2 finished
Jan 10 17:09:27 compute-0 lvm[162343]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:09:27 compute-0 lvm[162343]: VG ceph_vg2 finished
Jan 10 17:09:27 compute-0 interesting_thompson[161769]: {}
Jan 10 17:09:27 compute-0 systemd[1]: libpod-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Deactivated successfully.
Jan 10 17:09:27 compute-0 podman[161676]: 2026-01-10 17:09:27.326217524 +0000 UTC m=+1.008746344 container died c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:09:27 compute-0 systemd[1]: libpod-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Consumed 1.365s CPU time.
Jan 10 17:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718-merged.mount: Deactivated successfully.
Jan 10 17:09:27 compute-0 podman[161676]: 2026-01-10 17:09:27.3799252 +0000 UTC m=+1.062454030 container remove c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 10 17:09:27 compute-0 systemd[1]: libpod-conmon-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Deactivated successfully.
Jan 10 17:09:27 compute-0 sudo[161231]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:09:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:09:27 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:27 compute-0 ceph-mon[75249]: pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:27 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:27 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:09:27 compute-0 sudo[162481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:09:27 compute-0 sudo[162481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:09:27 compute-0 sudo[162481]: pam_unix(sudo:session): session closed for user root
Jan 10 17:09:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:29 compute-0 ceph-mon[75249]: pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:31 compute-0 ceph-mon[75249]: pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:33 compute-0 ceph-mon[75249]: pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:35 compute-0 ceph-mon[75249]: pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:37 compute-0 ceph-mon[75249]: pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:09:38
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'cephfs.cephfs.data']
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:09:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:09:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:09:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:39 compute-0 ceph-mon[75249]: pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:41 compute-0 ceph-mon[75249]: pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:43 compute-0 ceph-mon[75249]: pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:09:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:09:45 compute-0 podman[170688]: 2026-01-10 17:09:45.142383684 +0000 UTC m=+0.130705777 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 10 17:09:45 compute-0 ceph-mon[75249]: pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:47 compute-0 ceph-mon[75249]: pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.911 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:09:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.914 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:09:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:09:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:49 compute-0 ceph-mon[75249]: pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:50 compute-0 podman[172895]: 2026-01-10 17:09:50.045410859 +0000 UTC m=+0.051419238 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 10 17:09:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:51 compute-0 ceph-mon[75249]: pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:53 compute-0 ceph-mon[75249]: pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:55 compute-0 ceph-mon[75249]: pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:57 compute-0 ceph-mon[75249]: pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:09:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:09:59 compute-0 ceph-mon[75249]: pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:01 compute-0 ceph-mon[75249]: pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:03 compute-0 ceph-mon[75249]: pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:05 compute-0 ceph-mon[75249]: pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:07 compute-0 ceph-mon[75249]: pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:09 compute-0 ceph-mon[75249]: pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:11 compute-0 ceph-mon[75249]: pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:13 compute-0 ceph-mon[75249]: pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:13 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 17:10:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 17:10:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:14 compute-0 groupadd[178031]: group added to /etc/group: name=dnsmasq, GID=992
Jan 10 17:10:14 compute-0 groupadd[178031]: group added to /etc/gshadow: name=dnsmasq
Jan 10 17:10:14 compute-0 groupadd[178031]: new group: name=dnsmasq, GID=992
Jan 10 17:10:14 compute-0 useradd[178038]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 10 17:10:14 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 17:10:14 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 10 17:10:14 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 17:10:15 compute-0 ceph-mon[75249]: pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:16 compute-0 groupadd[178074]: group added to /etc/group: name=clevis, GID=991
Jan 10 17:10:16 compute-0 groupadd[178074]: group added to /etc/gshadow: name=clevis
Jan 10 17:10:16 compute-0 podman[178049]: 2026-01-10 17:10:16.171396941 +0000 UTC m=+0.145575297 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 10 17:10:16 compute-0 groupadd[178074]: new group: name=clevis, GID=991
Jan 10 17:10:16 compute-0 useradd[178084]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 10 17:10:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:16 compute-0 usermod[178094]: add 'clevis' to group 'tss'
Jan 10 17:10:16 compute-0 usermod[178094]: add 'clevis' to shadow group 'tss'
Jan 10 17:10:17 compute-0 ceph-mon[75249]: pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:19 compute-0 polkitd[43532]: Reloading rules
Jan 10 17:10:19 compute-0 polkitd[43532]: Collecting garbage unconditionally...
Jan 10 17:10:19 compute-0 polkitd[43532]: Loading rules from directory /etc/polkit-1/rules.d
Jan 10 17:10:19 compute-0 polkitd[43532]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 10 17:10:19 compute-0 polkitd[43532]: Finished loading, compiling and executing 3 rules
Jan 10 17:10:19 compute-0 polkitd[43532]: Reloading rules
Jan 10 17:10:19 compute-0 polkitd[43532]: Collecting garbage unconditionally...
Jan 10 17:10:19 compute-0 polkitd[43532]: Loading rules from directory /etc/polkit-1/rules.d
Jan 10 17:10:19 compute-0 polkitd[43532]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 10 17:10:19 compute-0 polkitd[43532]: Finished loading, compiling and executing 3 rules
Jan 10 17:10:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:19 compute-0 ceph-mon[75249]: pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:20 compute-0 podman[178281]: 2026-01-10 17:10:20.809047432 +0000 UTC m=+0.080930057 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 10 17:10:21 compute-0 ceph-mon[75249]: pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:23 compute-0 ceph-mon[75249]: pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:24 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 10 17:10:24 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 10 17:10:24 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 10 17:10:24 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 10 17:10:24 compute-0 systemd[1]: sshd.service: Consumed 4.907s CPU time, read 564.0K from disk, written 28.0K to disk.
Jan 10 17:10:24 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 10 17:10:24 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 10 17:10:24 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 17:10:24 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 17:10:24 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 17:10:24 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 10 17:10:24 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 10 17:10:24 compute-0 sshd[178908]: Server listening on 0.0.0.0 port 22.
Jan 10 17:10:24 compute-0 sshd[178908]: Server listening on :: port 22.
Jan 10 17:10:24 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 10 17:10:25 compute-0 ceph-mon[75249]: pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 17:10:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 17:10:27 compute-0 ceph-mon[75249]: pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:27 compute-0 systemd[1]: Reloading.
Jan 10 17:10:27 compute-0 systemd-sysv-generator[179194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:27 compute-0 systemd-rc-local-generator[179189]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:27 compute-0 sudo[179139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:10:27 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 17:10:27 compute-0 sudo[179139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:27 compute-0 sudo[179139]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:28 compute-0 sudo[179398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 17:10:28 compute-0 sudo[179398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:29 compute-0 ceph-mon[75249]: pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:29 compute-0 sshd-session[180050]: Connection closed by authenticating user nobody 216.36.124.133 port 47018 [preauth]
Jan 10 17:10:29 compute-0 podman[180147]: 2026-01-10 17:10:29.562533842 +0000 UTC m=+0.871541113 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:10:29 compute-0 podman[180147]: 2026-01-10 17:10:29.707682121 +0000 UTC m=+1.016689372 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:10:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:30 compute-0 sudo[160201]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:30 compute-0 sudo[179398]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:10:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:10:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:30 compute-0 sudo[182088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:10:30 compute-0 sudo[182088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:30 compute-0 sudo[182088]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:30 compute-0 sudo[182220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:10:30 compute-0 sudo[182220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:31 compute-0 sudo[182669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjdjjlclasrhqkcsosincvofjepaqsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065030.626197-331-262877261196072/AnsiballZ_systemd.py'
Jan 10 17:10:31 compute-0 sudo[182669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:31 compute-0 sudo[182220]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:10:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:10:31 compute-0 sudo[182780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:10:31 compute-0 sudo[182780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:31 compute-0 sudo[182780]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:31 compute-0 sudo[182877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:10:31 compute-0 sudo[182877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:31 compute-0 python3.9[182695]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:10:31 compute-0 ceph-mon[75249]: pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:10:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:10:31 compute-0 systemd[1]: Reloading.
Jan 10 17:10:31 compute-0 systemd-rc-local-generator[183241]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:31 compute-0 systemd-sysv-generator[183244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:31 compute-0 podman[183170]: 2026-01-10 17:10:31.754544336 +0000 UTC m=+0.071146790 container create c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:10:31 compute-0 podman[183170]: 2026-01-10 17:10:31.721526289 +0000 UTC m=+0.038128773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:31 compute-0 systemd[1]: Started libpod-conmon-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope.
Jan 10 17:10:31 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:32 compute-0 sudo[182669]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:32 compute-0 podman[183170]: 2026-01-10 17:10:32.00910688 +0000 UTC m=+0.325709364 container init c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:10:32 compute-0 podman[183170]: 2026-01-10 17:10:32.018507846 +0000 UTC m=+0.335110320 container start c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:10:32 compute-0 podman[183170]: 2026-01-10 17:10:32.02284392 +0000 UTC m=+0.339446424 container attach c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 17:10:32 compute-0 eager_franklin[183451]: 167 167
Jan 10 17:10:32 compute-0 systemd[1]: libpod-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope: Deactivated successfully.
Jan 10 17:10:32 compute-0 podman[183170]: 2026-01-10 17:10:32.026834033 +0000 UTC m=+0.343436557 container died c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d36906dee0b14a2834cf381c242f1de5ce707410bef75d3f8a141f7641eba0-merged.mount: Deactivated successfully.
Jan 10 17:10:32 compute-0 podman[183170]: 2026-01-10 17:10:32.079913839 +0000 UTC m=+0.396516293 container remove c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:10:32 compute-0 systemd[1]: libpod-conmon-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope: Deactivated successfully.
Jan 10 17:10:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.285544713 +0000 UTC m=+0.053560381 container create 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:10:32 compute-0 systemd[1]: Started libpod-conmon-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope.
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.260289817 +0000 UTC m=+0.028305495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:32 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.3802338 +0000 UTC m=+0.148249508 container init 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.38868439 +0000 UTC m=+0.156700088 container start 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.392995812 +0000 UTC m=+0.161011480 container attach 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:10:32 compute-0 sudo[184108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpzipcjevaetkdruepdjiueletktmwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065032.16719-331-224752488308741/AnsiballZ_systemd.py'
Jan 10 17:10:32 compute-0 sudo[184108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:32 compute-0 python3.9[184132]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:10:32 compute-0 systemd[1]: Reloading.
Jan 10 17:10:32 compute-0 brave_faraday[183942]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:10:32 compute-0 brave_faraday[183942]: --> All data devices are unavailable
Jan 10 17:10:32 compute-0 podman[183767]: 2026-01-10 17:10:32.930514026 +0000 UTC m=+0.698529704 container died 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 17:10:32 compute-0 systemd-rc-local-generator[184556]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:32 compute-0 systemd-sysv-generator[184565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:33 compute-0 systemd[1]: libpod-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope: Deactivated successfully.
Jan 10 17:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5-merged.mount: Deactivated successfully.
Jan 10 17:10:33 compute-0 podman[183767]: 2026-01-10 17:10:33.176313991 +0000 UTC m=+0.944329689 container remove 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:10:33 compute-0 sudo[184108]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:33 compute-0 systemd[1]: libpod-conmon-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope: Deactivated successfully.
Jan 10 17:10:33 compute-0 sudo[182877]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:33 compute-0 sudo[184913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:10:33 compute-0 sudo[184913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:33 compute-0 sudo[184913]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:33 compute-0 sudo[185009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:10:33 compute-0 sudo[185009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:33 compute-0 ceph-mon[75249]: pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:33 compute-0 sudo[185418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihtdcmqxxaegszsalzrjixuzrugvguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065033.359495-331-143937896287437/AnsiballZ_systemd.py'
Jan 10 17:10:33 compute-0 sudo[185418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.724284621 +0000 UTC m=+0.063480632 container create aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:10:33 compute-0 systemd[1]: Started libpod-conmon-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope.
Jan 10 17:10:33 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.696937015 +0000 UTC m=+0.036133036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.81655612 +0000 UTC m=+0.155752121 container init aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.8246829 +0000 UTC m=+0.163878901 container start aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.828975062 +0000 UTC m=+0.168171103 container attach aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:10:33 compute-0 busy_keldysh[185490]: 167 167
Jan 10 17:10:33 compute-0 systemd[1]: libpod-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope: Deactivated successfully.
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.83278212 +0000 UTC m=+0.171978091 container died aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 17:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef8e8aacc099adc0655ad65a161ab96f1a445326e7598f6c596a1621d9e414d-merged.mount: Deactivated successfully.
Jan 10 17:10:33 compute-0 podman[185365]: 2026-01-10 17:10:33.888253744 +0000 UTC m=+0.227449755 container remove aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:10:33 compute-0 systemd[1]: libpod-conmon-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope: Deactivated successfully.
Jan 10 17:10:33 compute-0 python3.9[185443]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:10:34 compute-0 systemd[1]: Reloading.
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.083094853 +0000 UTC m=+0.063449671 container create e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:10:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:34 compute-0 systemd-rc-local-generator[185845]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:34 compute-0 systemd-sysv-generator[185849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.063003663 +0000 UTC m=+0.043358491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:34 compute-0 systemd[1]: Started libpod-conmon-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope.
Jan 10 17:10:34 compute-0 sudo[185418]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:34 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.484557036 +0000 UTC m=+0.464911894 container init e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.498642255 +0000 UTC m=+0.478997073 container start e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.502710691 +0000 UTC m=+0.483065529 container attach e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 17:10:34 compute-0 eager_snyder[186059]: {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     "0": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "devices": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "/dev/loop3"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             ],
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_name": "ceph_lv0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_size": "21470642176",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "name": "ceph_lv0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "tags": {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_name": "ceph",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.crush_device_class": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.encrypted": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.objectstore": "bluestore",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_id": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.vdo": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.with_tpm": "0"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             },
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "vg_name": "ceph_vg0"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         }
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     ],
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     "1": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "devices": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "/dev/loop4"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             ],
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_name": "ceph_lv1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_size": "21470642176",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "name": "ceph_lv1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "tags": {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_name": "ceph",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.crush_device_class": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.encrypted": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.objectstore": "bluestore",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_id": "1",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.vdo": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.with_tpm": "0"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             },
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "vg_name": "ceph_vg1"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         }
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     ],
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     "2": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "devices": [
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "/dev/loop5"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             ],
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_name": "ceph_lv2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_size": "21470642176",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "name": "ceph_lv2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "tags": {
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.cluster_name": "ceph",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.crush_device_class": "",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.encrypted": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.objectstore": "bluestore",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osd_id": "2",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.vdo": "0",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:                 "ceph.with_tpm": "0"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             },
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "type": "block",
Jan 10 17:10:34 compute-0 eager_snyder[186059]:             "vg_name": "ceph_vg2"
Jan 10 17:10:34 compute-0 eager_snyder[186059]:         }
Jan 10 17:10:34 compute-0 eager_snyder[186059]:     ]
Jan 10 17:10:34 compute-0 eager_snyder[186059]: }
Jan 10 17:10:34 compute-0 systemd[1]: libpod-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope: Deactivated successfully.
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.827553169 +0000 UTC m=+0.807908037 container died e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef-merged.mount: Deactivated successfully.
Jan 10 17:10:34 compute-0 podman[185703]: 2026-01-10 17:10:34.881407277 +0000 UTC m=+0.861762095 container remove e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:10:34 compute-0 systemd[1]: libpod-conmon-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope: Deactivated successfully.
Jan 10 17:10:34 compute-0 sudo[185009]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:34 compute-0 sudo[186622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdrbvwcjdsaxxqlsvjmgnlwosrpupocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065034.610909-331-79435846134883/AnsiballZ_systemd.py'
Jan 10 17:10:34 compute-0 sudo[186622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:35 compute-0 sudo[186606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:10:35 compute-0 sudo[186606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:35 compute-0 sudo[186606]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:35 compute-0 sudo[186692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:10:35 compute-0 sudo[186692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:35 compute-0 python3.9[186666]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:10:35 compute-0 systemd[1]: Reloading.
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.408928507 +0000 UTC m=+0.045149752 container create a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:10:35 compute-0 systemd-rc-local-generator[187120]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:35 compute-0 systemd-sysv-generator[187125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.389254899 +0000 UTC m=+0.025476154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:35 compute-0 ceph-mon[75249]: pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:35 compute-0 systemd[1]: Started libpod-conmon-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope.
Jan 10 17:10:35 compute-0 sudo[186622]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.723685688 +0000 UTC m=+0.359907023 container init a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.732295073 +0000 UTC m=+0.368516308 container start a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 17:10:35 compute-0 vigilant_dirac[187389]: 167 167
Jan 10 17:10:35 compute-0 systemd[1]: libpod-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope: Deactivated successfully.
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.768334765 +0000 UTC m=+0.404556000 container attach a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.76955563 +0000 UTC m=+0.405776885 container died a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bac474c00fbbc4397d63901a523d8dcb8f7dc01fdb0182025c38e12ee30047d-merged.mount: Deactivated successfully.
Jan 10 17:10:35 compute-0 podman[187005]: 2026-01-10 17:10:35.888333561 +0000 UTC m=+0.524554806 container remove a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:10:35 compute-0 systemd[1]: libpod-conmon-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope: Deactivated successfully.
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.091913638 +0000 UTC m=+0.063940576 container create 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:10:36 compute-0 systemd[1]: Started libpod-conmon-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope.
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.071675353 +0000 UTC m=+0.043702261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:10:36 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.19453953 +0000 UTC m=+0.166566438 container init 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.20758485 +0000 UTC m=+0.179611748 container start 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.21147037 +0000 UTC m=+0.183497268 container attach 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:10:36 compute-0 sudo[188133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjknfnkekahnhsuroonxkuqtkevvbdwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065035.8951788-360-148990059137805/AnsiballZ_systemd.py'
Jan 10 17:10:36 compute-0 sudo[188133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:36 compute-0 python3.9[188153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:36 compute-0 systemd[1]: Reloading.
Jan 10 17:10:36 compute-0 systemd-sysv-generator[188570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:36 compute-0 systemd-rc-local-generator[188567]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:36 compute-0 sudo[188133]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:36 compute-0 lvm[188842]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:10:36 compute-0 lvm[188842]: VG ceph_vg0 finished
Jan 10 17:10:36 compute-0 bold_fermat[188051]: {}
Jan 10 17:10:36 compute-0 lvm[188854]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:10:36 compute-0 lvm[188854]: VG ceph_vg1 finished
Jan 10 17:10:36 compute-0 lvm[188859]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:10:36 compute-0 lvm[188859]: VG ceph_vg2 finished
Jan 10 17:10:36 compute-0 systemd[1]: libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Deactivated successfully.
Jan 10 17:10:36 compute-0 conmon[188051]: conmon 7842f8e45d7aabdca053 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope/container/memory.events
Jan 10 17:10:36 compute-0 systemd[1]: libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Consumed 1.272s CPU time.
Jan 10 17:10:36 compute-0 podman[187823]: 2026-01-10 17:10:36.992151454 +0000 UTC m=+0.964178352 container died 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572-merged.mount: Deactivated successfully.
Jan 10 17:10:37 compute-0 podman[187823]: 2026-01-10 17:10:37.04662614 +0000 UTC m=+1.018653038 container remove 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 10 17:10:37 compute-0 systemd[1]: libpod-conmon-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Deactivated successfully.
Jan 10 17:10:37 compute-0 sudo[186692]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:10:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:10:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:37 compute-0 sudo[189099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:10:37 compute-0 sudo[189099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:10:37 compute-0 sudo[189099]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:37 compute-0 sudo[189418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwamqgdtpzugztcphtrkkayqukixneim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065037.0569487-360-72255283583778/AnsiballZ_systemd.py'
Jan 10 17:10:37 compute-0 sudo[189418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:37 compute-0 ceph-mon[75249]: pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:37 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:37 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:10:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 17:10:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 17:10:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.381s CPU time.
Jan 10 17:10:37 compute-0 systemd[1]: run-r835e44116d024a76a38504c44c3a9875.service: Deactivated successfully.
Jan 10 17:10:37 compute-0 python3.9[189439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:37 compute-0 systemd[1]: Reloading.
Jan 10 17:10:37 compute-0 systemd-sysv-generator[189544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:37 compute-0 systemd-rc-local-generator[189540]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:10:38
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'backups', 'vms']
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:10:38 compute-0 sudo[189418]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:38 compute-0 sudo[189700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptemmtxrmvtdufkcuffivpxvgayonfru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065038.3763986-360-48943663987762/AnsiballZ_systemd.py'
Jan 10 17:10:38 compute-0 sudo[189700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:10:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:10:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:10:39 compute-0 python3.9[189702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:39 compute-0 systemd[1]: Reloading.
Jan 10 17:10:39 compute-0 systemd-rc-local-generator[189729]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:39 compute-0 systemd-sysv-generator[189736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:39 compute-0 sudo[189700]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:39 compute-0 ceph-mon[75249]: pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:40 compute-0 sudo[189890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxdtvntywwxkzbjuzbgolsqjzthpeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065039.7601113-360-83484132449871/AnsiballZ_systemd.py'
Jan 10 17:10:40 compute-0 sudo[189890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:40 compute-0 python3.9[189892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:40 compute-0 sudo[189890]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:41 compute-0 sudo[190045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkcarzsxkdxwyakldgwlfwguyzwcliwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065040.7811153-360-8097668032342/AnsiballZ_systemd.py'
Jan 10 17:10:41 compute-0 sudo[190045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:41 compute-0 python3.9[190047]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:41 compute-0 systemd[1]: Reloading.
Jan 10 17:10:41 compute-0 systemd-rc-local-generator[190076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:41 compute-0 systemd-sysv-generator[190083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:41 compute-0 ceph-mon[75249]: pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:41 compute-0 sudo[190045]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:42 compute-0 sudo[190235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlcyqvyddxijwjrsyowkgcpqgcafjer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065042.100409-396-265243202414099/AnsiballZ_systemd.py'
Jan 10 17:10:42 compute-0 sudo[190235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:42 compute-0 python3.9[190237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 17:10:42 compute-0 systemd[1]: Reloading.
Jan 10 17:10:42 compute-0 systemd-rc-local-generator[190268]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:10:42 compute-0 systemd-sysv-generator[190271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:10:43 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 10 17:10:43 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 10 17:10:43 compute-0 sudo[190235]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:43 compute-0 ceph-mon[75249]: pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:43 compute-0 sudo[190428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkhelyswllahjhozwfauuknrvtwinjxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065043.4769657-404-268882175128552/AnsiballZ_systemd.py'
Jan 10 17:10:43 compute-0 sudo[190428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:44 compute-0 python3.9[190430]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:10:44 compute-0 sudo[190428]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:10:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:10:44 compute-0 sudo[190583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqwsazrnikjglbswqvomwtbgztfgbed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065044.6031756-404-170616817622490/AnsiballZ_systemd.py'
Jan 10 17:10:44 compute-0 sudo[190583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:45 compute-0 python3.9[190585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:45 compute-0 sudo[190583]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:45 compute-0 ceph-mon[75249]: pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:45 compute-0 sudo[190738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjfidvlkptyrmkyogrwzbilsuypgwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065045.5194595-404-105452084660329/AnsiballZ_systemd.py'
Jan 10 17:10:45 compute-0 sudo[190738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:46 compute-0 python3.9[190740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:46 compute-0 sudo[190738]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:46 compute-0 podman[190742]: 2026-01-10 17:10:46.426678971 +0000 UTC m=+0.125078301 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:10:47 compute-0 sudo[190919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aerdnamqeajarhjbeoardbkewmiilgif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065046.5275264-404-49400704532051/AnsiballZ_systemd.py'
Jan 10 17:10:47 compute-0 sudo[190919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:47 compute-0 python3.9[190921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:47 compute-0 sudo[190919]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:47 compute-0 ceph-mon[75249]: pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:47 compute-0 sudo[191074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosenrkflrbcrypjmbmvhpqtralzigfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065047.6087182-404-70597164538657/AnsiballZ_systemd.py'
Jan 10 17:10:47 compute-0 sudo[191074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:48 compute-0 python3.9[191076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:48 compute-0 sudo[191074]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:48 compute-0 sshd-session[191077]: Received disconnect from 80.94.93.233 port 49400:11:  [preauth]
Jan 10 17:10:48 compute-0 sshd-session[191077]: Disconnected from authenticating user root 80.94.93.233 port 49400 [preauth]
Jan 10 17:10:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.911 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:10:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.912 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:10:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.912 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:10:48 compute-0 sudo[191231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqczhkxozytxljdxsiertaemzoqnqlnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065048.5802462-404-11322167777642/AnsiballZ_systemd.py'
Jan 10 17:10:48 compute-0 sudo[191231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:49 compute-0 python3.9[191233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:49 compute-0 sudo[191231]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:49 compute-0 ceph-mon[75249]: pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:49 compute-0 sudo[191386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtwffsstlpmlxdlbugjfqocrupnfclgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065049.5962691-404-156451305249158/AnsiballZ_systemd.py'
Jan 10 17:10:50 compute-0 sudo[191386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:50 compute-0 python3.9[191388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:50 compute-0 sudo[191386]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:50 compute-0 sudo[191551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxukqejspdqipaspatouohqzzufofhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065050.5832584-404-235334903681303/AnsiballZ_systemd.py'
Jan 10 17:10:50 compute-0 sudo[191551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:50 compute-0 podman[191515]: 2026-01-10 17:10:50.97467993 +0000 UTC m=+0.073912809 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 17:10:51 compute-0 python3.9[191562]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:51 compute-0 sudo[191551]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:51 compute-0 ceph-mon[75249]: pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:51 compute-0 sudo[191715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslwkjxclpatjvcztngxhoancifvjjse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065051.5517764-404-120218543130044/AnsiballZ_systemd.py'
Jan 10 17:10:51 compute-0 sudo[191715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:52 compute-0 python3.9[191717]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:52 compute-0 sudo[191715]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:53 compute-0 sudo[191870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnqqyrwjwnlvgjlfjhzvovnlbevfoybs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065052.6719344-404-29449267920362/AnsiballZ_systemd.py'
Jan 10 17:10:53 compute-0 sudo[191870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:53 compute-0 python3.9[191872]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:53 compute-0 sudo[191870]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:53 compute-0 ceph-mon[75249]: pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:54 compute-0 sudo[192025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diwxkskizubtwmuigaxuzvdzstbhsvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065053.6903262-404-198359965552746/AnsiballZ_systemd.py'
Jan 10 17:10:54 compute-0 sudo[192025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:54 compute-0 python3.9[192027]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:54 compute-0 sudo[192025]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:55 compute-0 sudo[192180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbkggfikesuxwxabkzfdxfvdbnikgegq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065054.624759-404-281136445641245/AnsiballZ_systemd.py'
Jan 10 17:10:55 compute-0 sudo[192180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:55 compute-0 python3.9[192182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:55 compute-0 sudo[192180]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:55 compute-0 ceph-mon[75249]: pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:56 compute-0 sudo[192335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkqssfktxnpbzxpaxhqsmnbpaedylgrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065055.6795387-404-263701734605600/AnsiballZ_systemd.py'
Jan 10 17:10:56 compute-0 sudo[192335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:56 compute-0 python3.9[192337]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:56 compute-0 sudo[192335]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:57 compute-0 sudo[192490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdrecztccupahbdcnfinvxglangqnbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065056.7489483-404-35331026084956/AnsiballZ_systemd.py'
Jan 10 17:10:57 compute-0 sudo[192490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:57 compute-0 python3.9[192492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 17:10:57 compute-0 sudo[192490]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:57 compute-0 ceph-mon[75249]: pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:10:58 compute-0 sudo[192645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzsdzpnbsqrfgzfsnskgddedxghpdos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065058.1111012-506-179166005017670/AnsiballZ_file.py'
Jan 10 17:10:58 compute-0 sudo[192645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:58 compute-0 python3.9[192647]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:10:58 compute-0 sudo[192645]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:10:59 compute-0 sudo[192797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biimpxlqkyfjtkxsmqitfvvviitqczmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065058.9733336-506-73456004660439/AnsiballZ_file.py'
Jan 10 17:10:59 compute-0 sudo[192797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:10:59 compute-0 python3.9[192799]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:10:59 compute-0 sudo[192797]: pam_unix(sudo:session): session closed for user root
Jan 10 17:10:59 compute-0 ceph-mon[75249]: pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:00 compute-0 sudo[192949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixnpxlurmwimslwpjoltyqiezbflnlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065059.785257-506-80786639034417/AnsiballZ_file.py'
Jan 10 17:11:00 compute-0 sudo[192949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:00 compute-0 python3.9[192951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:11:00 compute-0 sudo[192949]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:00 compute-0 sudo[193101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzhugirlpvmmkmhqjjrxestcejqranp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065060.5861733-506-214533461270071/AnsiballZ_file.py'
Jan 10 17:11:00 compute-0 sudo[193101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:01 compute-0 python3.9[193103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:11:01 compute-0 sudo[193101]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:01 compute-0 sudo[193253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jguuksctxpzkjyggtcjspjackjbrqiju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065061.2583146-506-194407069009276/AnsiballZ_file.py'
Jan 10 17:11:01 compute-0 sudo[193253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:01 compute-0 ceph-mon[75249]: pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:01 compute-0 python3.9[193255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:11:01 compute-0 sudo[193253]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:02 compute-0 sudo[193405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odimzcccjokfqpprieilvbcusqnmfwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065062.0653553-506-41442441235680/AnsiballZ_file.py'
Jan 10 17:11:02 compute-0 sudo[193405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:02 compute-0 python3.9[193407]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:11:02 compute-0 sudo[193405]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:03 compute-0 sudo[193557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkzdwwtysrpftuoqzgpegrochlciaekq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065062.964221-549-125725843356237/AnsiballZ_stat.py'
Jan 10 17:11:03 compute-0 sudo[193557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:03 compute-0 python3.9[193559]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:03 compute-0 sudo[193557]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:03 compute-0 ceph-mon[75249]: pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:04 compute-0 sudo[193682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxafcifnijgurfdfkcxjelvioxzhjvdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065062.964221-549-125725843356237/AnsiballZ_copy.py'
Jan 10 17:11:04 compute-0 sudo[193682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:04 compute-0 python3.9[193684]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065062.964221-549-125725843356237/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:04 compute-0 sudo[193682]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:04 compute-0 ceph-mon[75249]: pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:05 compute-0 sudo[193834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnasnrboceppfsiaoogtmqkqiqbgdpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065064.8034894-549-280104799303355/AnsiballZ_stat.py'
Jan 10 17:11:05 compute-0 sudo[193834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:05 compute-0 python3.9[193836]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:05 compute-0 sudo[193834]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:05 compute-0 sudo[193959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blxccvjhqyqukiplwiakyotnpcsvzppl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065064.8034894-549-280104799303355/AnsiballZ_copy.py'
Jan 10 17:11:05 compute-0 sudo[193959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:06 compute-0 python3.9[193961]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065064.8034894-549-280104799303355/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:06 compute-0 sudo[193959]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:06 compute-0 sudo[194111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usskddfythqrslfavsycymnrhabunzat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065066.2994268-549-60611555623907/AnsiballZ_stat.py'
Jan 10 17:11:06 compute-0 sudo[194111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:06 compute-0 python3.9[194113]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:06 compute-0 sudo[194111]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:07 compute-0 ceph-mon[75249]: pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:07 compute-0 sudo[194236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myaxwnebtdcqdoqazhrjaaquojxjzsqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065066.2994268-549-60611555623907/AnsiballZ_copy.py'
Jan 10 17:11:07 compute-0 sudo[194236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:07 compute-0 python3.9[194238]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065066.2994268-549-60611555623907/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:07 compute-0 sudo[194236]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:08 compute-0 sudo[194388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vegbubmcwjajntsskhfslowicpmmjjga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065067.755869-549-254126462766809/AnsiballZ_stat.py'
Jan 10 17:11:08 compute-0 sudo[194388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:08 compute-0 python3.9[194390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:08 compute-0 sudo[194388]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:08 compute-0 sudo[194513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvlhlkyztkjjcbgjuznlybgztnaimkra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065067.755869-549-254126462766809/AnsiballZ_copy.py'
Jan 10 17:11:08 compute-0 sudo[194513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:09 compute-0 python3.9[194515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065067.755869-549-254126462766809/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:09 compute-0 sudo[194513]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:09 compute-0 ceph-mon[75249]: pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:09 compute-0 sudo[194665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyrtngomjpvkhgzoyjxtvqxmjkldmgpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065069.3454819-549-115994752453085/AnsiballZ_stat.py'
Jan 10 17:11:09 compute-0 sudo[194665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:09 compute-0 python3.9[194667]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:09 compute-0 sudo[194665]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:10 compute-0 sudo[194790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsraeznbtbwdzqrhdbefigpsdcziowmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065069.3454819-549-115994752453085/AnsiballZ_copy.py'
Jan 10 17:11:10 compute-0 sudo[194790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:10 compute-0 python3.9[194792]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065069.3454819-549-115994752453085/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:10 compute-0 sudo[194790]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:11 compute-0 sudo[194942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjoettjutpgiwlblwvnmdojqdpuutgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065070.8902962-549-280479923475804/AnsiballZ_stat.py'
Jan 10 17:11:11 compute-0 sudo[194942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:11 compute-0 ceph-mon[75249]: pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:11 compute-0 python3.9[194944]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:11 compute-0 sudo[194942]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:11 compute-0 sudo[195067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmqawlbzhbioxncszfjffjltpcmrgqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065070.8902962-549-280479923475804/AnsiballZ_copy.py'
Jan 10 17:11:11 compute-0 sudo[195067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:12 compute-0 python3.9[195069]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065070.8902962-549-280479923475804/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:12 compute-0 sudo[195067]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:12 compute-0 sudo[195219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqcgyxunthsnutrfuyruffrogrornldk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065072.391514-549-252219836644825/AnsiballZ_stat.py'
Jan 10 17:11:12 compute-0 sudo[195219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:13 compute-0 python3.9[195221]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:13 compute-0 sudo[195219]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:13 compute-0 ceph-mon[75249]: pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:13 compute-0 sudo[195342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgomaitzxqiywzrhqunmcdzvgoaivoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065072.391514-549-252219836644825/AnsiballZ_copy.py'
Jan 10 17:11:13 compute-0 sudo[195342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:13 compute-0 python3.9[195344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065072.391514-549-252219836644825/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:13 compute-0 sudo[195342]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:14 compute-0 sudo[195494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbycasqwpdcjodwiykbwyoulpmwoxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065073.9724784-549-39402097765111/AnsiballZ_stat.py'
Jan 10 17:11:14 compute-0 sudo[195494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:14 compute-0 python3.9[195496]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:14 compute-0 sudo[195494]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:14 compute-0 sudo[195619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqzbeiymqgwvkefpfejvmnzvgrcsnbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065073.9724784-549-39402097765111/AnsiballZ_copy.py'
Jan 10 17:11:14 compute-0 sudo[195619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:15 compute-0 python3.9[195621]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065073.9724784-549-39402097765111/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:15 compute-0 sudo[195619]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:15 compute-0 ceph-mon[75249]: pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:15 compute-0 sudo[195771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wotkxgywpahehddspikivtotyxemhwgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065075.4019926-662-143676005092241/AnsiballZ_command.py'
Jan 10 17:11:15 compute-0 sudo[195771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:15 compute-0 python3.9[195773]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 10 17:11:16 compute-0 sudo[195771]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:16 compute-0 sudo[195935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgruqtpzyxlxvwgnjrjhnyebrhucaik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065076.3455555-671-82870012724343/AnsiballZ_file.py'
Jan 10 17:11:16 compute-0 sudo[195935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:16 compute-0 podman[195898]: 2026-01-10 17:11:16.867988011 +0000 UTC m=+0.189173069 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 10 17:11:16 compute-0 python3.9[195943]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:16 compute-0 sudo[195935]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:17 compute-0 ceph-mon[75249]: pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:17 compute-0 sudo[196100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsgsjpkzovrdolsycysezkorlevclps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065077.1083329-671-238187473735722/AnsiballZ_file.py'
Jan 10 17:11:17 compute-0 sudo[196100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:17 compute-0 python3.9[196102]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:17 compute-0 sudo[196100]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:18 compute-0 sudo[196252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddxierspqmecztbyfkwqlrihzelufdrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065077.800732-671-107583755479068/AnsiballZ_file.py'
Jan 10 17:11:18 compute-0 sudo[196252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:18 compute-0 python3.9[196254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:18 compute-0 sudo[196252]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:18 compute-0 sudo[196404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfkitcjvsopjlmrjhzicmtagdrajxui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065078.6069012-671-235805773102374/AnsiballZ_file.py'
Jan 10 17:11:19 compute-0 sudo[196404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:19 compute-0 python3.9[196406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:19 compute-0 sudo[196404]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:19 compute-0 ceph-mon[75249]: pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:19 compute-0 sudo[196556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncomtnhdzrfurluczxgiumvoqtzagsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065079.4112968-671-249112141871628/AnsiballZ_file.py'
Jan 10 17:11:19 compute-0 sudo[196556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:20 compute-0 python3.9[196558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:20 compute-0 sudo[196556]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:20 compute-0 sudo[196708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooicaxiwxwlhmhgvnrfabpgfsxcwhqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065080.2045596-671-232726098078392/AnsiballZ_file.py'
Jan 10 17:11:20 compute-0 sudo[196708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:20 compute-0 python3.9[196710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:20 compute-0 sudo[196708]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:21 compute-0 sudo[196873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmeuvwgpijkbvygyswqdjjlzyoxobrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065081.0109882-671-279396232435527/AnsiballZ_file.py'
Jan 10 17:11:21 compute-0 sudo[196873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:21 compute-0 ceph-mon[75249]: pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:21 compute-0 podman[196834]: 2026-01-10 17:11:21.435804964 +0000 UTC m=+0.095208993 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 10 17:11:21 compute-0 python3.9[196881]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:21 compute-0 sudo[196873]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:22 compute-0 sudo[197031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvojmrgwylmxlveowohpqtjtwmplffs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065081.8385503-671-55910304126518/AnsiballZ_file.py'
Jan 10 17:11:22 compute-0 sudo[197031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:22 compute-0 python3.9[197033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:22 compute-0 sudo[197031]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:22 compute-0 sudo[197183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buofynouyehvdalupzcjpvhfehkqazlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065082.5624082-671-205052366595104/AnsiballZ_file.py'
Jan 10 17:11:22 compute-0 sudo[197183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:23 compute-0 python3.9[197185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:23 compute-0 sudo[197183]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:23 compute-0 ceph-mon[75249]: pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:23 compute-0 sudo[197335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeoasrpoxwfihpmblzgrydclyitytgst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065083.2683942-671-43015440656637/AnsiballZ_file.py'
Jan 10 17:11:23 compute-0 sudo[197335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:23 compute-0 python3.9[197337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:23 compute-0 sudo[197335]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:24 compute-0 sudo[197487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpxkthhiganwakccoogwfywypqezvnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065084.0589304-671-195017735528579/AnsiballZ_file.py'
Jan 10 17:11:24 compute-0 sudo[197487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:24 compute-0 python3.9[197489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:24 compute-0 sudo[197487]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:25 compute-0 sudo[197639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhksbjgfszlmcawvepnodswpxkeladmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065084.8533552-671-231741783560742/AnsiballZ_file.py'
Jan 10 17:11:25 compute-0 sudo[197639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:25 compute-0 python3.9[197641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:25 compute-0 ceph-mon[75249]: pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:25 compute-0 sudo[197639]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:25 compute-0 sudo[197791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duzquhteeypkbeaosmgqgtolmkeujtug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065085.581183-671-151438886423789/AnsiballZ_file.py'
Jan 10 17:11:25 compute-0 sudo[197791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:26 compute-0 python3.9[197793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:26 compute-0 sudo[197791]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:26 compute-0 sudo[197943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqqamqmfufwccqbedmyojkgnlxnnpvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065086.296042-671-84019673666423/AnsiballZ_file.py'
Jan 10 17:11:26 compute-0 sudo[197943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:26 compute-0 python3.9[197945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:26 compute-0 sudo[197943]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:27 compute-0 sudo[198095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmpsxonvzpzkqexxrxladkhjzxyqmfal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065087.000799-770-85275796795406/AnsiballZ_stat.py'
Jan 10 17:11:27 compute-0 sudo[198095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:27 compute-0 ceph-mon[75249]: pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:27 compute-0 python3.9[198097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:27 compute-0 sudo[198095]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:28 compute-0 sudo[198218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbdysycvkvxbowgwawtbqoybebpceeer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065087.000799-770-85275796795406/AnsiballZ_copy.py'
Jan 10 17:11:28 compute-0 sudo[198218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:28 compute-0 python3.9[198220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065087.000799-770-85275796795406/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:28 compute-0 sudo[198218]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:28 compute-0 sudo[198370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfzvphggyyelbzliwalcasqimngdbezz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065088.4722717-770-132047822032570/AnsiballZ_stat.py'
Jan 10 17:11:28 compute-0 sudo[198370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:29 compute-0 python3.9[198372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:29 compute-0 sudo[198370]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:29 compute-0 ceph-mon[75249]: pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:29 compute-0 sudo[198493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvszyiytvglvaaceudtpmuozrsfivjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065088.4722717-770-132047822032570/AnsiballZ_copy.py'
Jan 10 17:11:29 compute-0 sudo[198493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:29 compute-0 python3.9[198495]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065088.4722717-770-132047822032570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:29 compute-0 sudo[198493]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:30 compute-0 sudo[198645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdaxqslkvcrxtlkmaokjysdtliqeejx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065089.9553614-770-250132790497331/AnsiballZ_stat.py'
Jan 10 17:11:30 compute-0 sudo[198645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:30 compute-0 python3.9[198647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:30 compute-0 sudo[198645]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:31 compute-0 sudo[198768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afnwqdfajwvqcktndgbdypjcdzxgrlxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065089.9553614-770-250132790497331/AnsiballZ_copy.py'
Jan 10 17:11:31 compute-0 sudo[198768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:31 compute-0 python3.9[198770]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065089.9553614-770-250132790497331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:31 compute-0 sudo[198768]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:31 compute-0 ceph-mon[75249]: pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:31 compute-0 sudo[198920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbfmltzjacpeufejjbpclrtiqapcgesa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065091.4737563-770-172816879163191/AnsiballZ_stat.py'
Jan 10 17:11:31 compute-0 sudo[198920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:32 compute-0 python3.9[198922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:32 compute-0 sudo[198920]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:32 compute-0 sudo[199043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cecytfpbakubietduradopkoupcnpyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065091.4737563-770-172816879163191/AnsiballZ_copy.py'
Jan 10 17:11:32 compute-0 sudo[199043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:32 compute-0 python3.9[199045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065091.4737563-770-172816879163191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:32 compute-0 sudo[199043]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:33 compute-0 ceph-mon[75249]: pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:33 compute-0 sudo[199195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxmtckuffbgwasgdhqvehhaojicijqco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065093.1165762-770-230128918859271/AnsiballZ_stat.py'
Jan 10 17:11:33 compute-0 sudo[199195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:33 compute-0 python3.9[199197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:33 compute-0 sudo[199195]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:34 compute-0 sudo[199318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfvblknkcphugykgovsykemdzysoiak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065093.1165762-770-230128918859271/AnsiballZ_copy.py'
Jan 10 17:11:34 compute-0 sudo[199318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:34 compute-0 python3.9[199320]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065093.1165762-770-230128918859271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:34 compute-0 sudo[199318]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:35 compute-0 sudo[199470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhlvfzdrsvwnwnyimdclfcnleyutdei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065094.6834524-770-102312907588834/AnsiballZ_stat.py'
Jan 10 17:11:35 compute-0 sudo[199470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:35 compute-0 python3.9[199472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:35 compute-0 sudo[199470]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:35 compute-0 ceph-mon[75249]: pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:35 compute-0 sudo[199593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnufqrwgfeizwjcxpoqkknjgqsnimwsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065094.6834524-770-102312907588834/AnsiballZ_copy.py'
Jan 10 17:11:35 compute-0 sudo[199593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:36 compute-0 python3.9[199595]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065094.6834524-770-102312907588834/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:36 compute-0 sudo[199593]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:36 compute-0 sudo[199745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbyzsdviuqdonfixazjjdllwdlhvlhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065096.2127438-770-232869817582423/AnsiballZ_stat.py'
Jan 10 17:11:36 compute-0 sudo[199745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:36 compute-0 python3.9[199747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:36 compute-0 sudo[199745]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:37 compute-0 sudo[199842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:11:37 compute-0 sudo[199842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:37 compute-0 sudo[199842]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:37 compute-0 sudo[199892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uykelgatjwgxoyxtcsdjfpfnzefzqzeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065096.2127438-770-232869817582423/AnsiballZ_copy.py'
Jan 10 17:11:37 compute-0 sudo[199892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:37 compute-0 sudo[199895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:11:37 compute-0 sudo[199895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:37 compute-0 python3.9[199898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065096.2127438-770-232869817582423/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:37 compute-0 sudo[199892]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:37 compute-0 ceph-mon[75249]: pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:38 compute-0 sudo[200100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtwniqnhyeuxqhgkgpfvoatoiviesxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065097.65904-770-197731767263542/AnsiballZ_stat.py'
Jan 10 17:11:38 compute-0 sudo[200100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:38 compute-0 sudo[199895]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:11:38
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', 'backups', 'cephfs.cephfs.data']
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:11:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:11:38 compute-0 sudo[200105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:11:38 compute-0 sudo[200105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:38 compute-0 sudo[200105]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:38 compute-0 sudo[200130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:11:38 compute-0 sudo[200130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:38 compute-0 python3.9[200104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:38 compute-0 sudo[200100]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:11:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.587569276 +0000 UTC m=+0.077560336 container create 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:11:38 compute-0 systemd[1]: Started libpod-conmon-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope.
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.554772705 +0000 UTC m=+0.044763785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:38 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.706168788 +0000 UTC m=+0.196159878 container init 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.719455629 +0000 UTC m=+0.209446669 container start 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.724575416 +0000 UTC m=+0.214566456 container attach 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:11:38 compute-0 eloquent_mclean[200277]: 167 167
Jan 10 17:11:38 compute-0 systemd[1]: libpod-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope: Deactivated successfully.
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.728926731 +0000 UTC m=+0.218917741 container died 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:11:38 compute-0 sudo[200307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvppxutrtgekuownpkxpxbxmgsequpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065097.65904-770-197731767263542/AnsiballZ_copy.py'
Jan 10 17:11:38 compute-0 sudo[200307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2773bebcaee26635972924452a7f742337b0c97ff138b834a823a0942bab93d3-merged.mount: Deactivated successfully.
Jan 10 17:11:38 compute-0 podman[200237]: 2026-01-10 17:11:38.776933148 +0000 UTC m=+0.266924158 container remove 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:11:38 compute-0 systemd[1]: libpod-conmon-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope: Deactivated successfully.
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:11:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:11:38 compute-0 python3.9[200312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065097.65904-770-197731767263542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:39 compute-0 sudo[200307]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.039065297 +0000 UTC m=+0.065749577 container create 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:11:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:11:39 compute-0 systemd[1]: Started libpod-conmon-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope.
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.009750276 +0000 UTC m=+0.036434626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:39 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.149253918 +0000 UTC m=+0.175938258 container init 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:11:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.163921709 +0000 UTC m=+0.190605999 container start 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.169282593 +0000 UTC m=+0.195966883 container attach 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:11:39 compute-0 ceph-mon[75249]: pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:39 compute-0 sudo[200505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gocvwucydmfnqyoedethrkfuyrnujccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065099.1982095-770-147458411985776/AnsiballZ_stat.py'
Jan 10 17:11:39 compute-0 sudo[200505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:39 compute-0 infallible_mcclintock[200368]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:11:39 compute-0 infallible_mcclintock[200368]: --> All data devices are unavailable
Jan 10 17:11:39 compute-0 python3.9[200507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:39 compute-0 systemd[1]: libpod-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope: Deactivated successfully.
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.766330649 +0000 UTC m=+0.793014929 container died 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:11:39 compute-0 sudo[200505]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539-merged.mount: Deactivated successfully.
Jan 10 17:11:39 compute-0 podman[200328]: 2026-01-10 17:11:39.835219225 +0000 UTC m=+0.861903475 container remove 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:11:39 compute-0 systemd[1]: libpod-conmon-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope: Deactivated successfully.
Jan 10 17:11:39 compute-0 sudo[200130]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:39 compute-0 sudo[200555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:11:39 compute-0 sudo[200555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:39 compute-0 sudo[200555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:40 compute-0 sudo[200608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:11:40 compute-0 sudo[200608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:40 compute-0 sudo[200698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ongkvqaragaoljmimqwovduhoafyizgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065099.1982095-770-147458411985776/AnsiballZ_copy.py'
Jan 10 17:11:40 compute-0 sudo[200698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:40 compute-0 python3.9[200700]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065099.1982095-770-147458411985776/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.410573549 +0000 UTC m=+0.073650814 container create 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:11:40 compute-0 sudo[200698]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:40 compute-0 auditd[702]: Audit daemon rotating log files
Jan 10 17:11:40 compute-0 systemd[1]: Started libpod-conmon-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope.
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.378656873 +0000 UTC m=+0.041734158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:40 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.518270088 +0000 UTC m=+0.181347413 container init 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.529259943 +0000 UTC m=+0.192337188 container start 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.53298312 +0000 UTC m=+0.196060395 container attach 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:11:40 compute-0 fervent_bell[200736]: 167 167
Jan 10 17:11:40 compute-0 systemd[1]: libpod-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope: Deactivated successfully.
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.537760397 +0000 UTC m=+0.200837662 container died 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b340413d28be8038ba683d8d5bf16cd03bfc9069d5318865fafaca02da68f09e-merged.mount: Deactivated successfully.
Jan 10 17:11:40 compute-0 podman[200713]: 2026-01-10 17:11:40.594366321 +0000 UTC m=+0.257443596 container remove 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:11:40 compute-0 systemd[1]: libpod-conmon-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope: Deactivated successfully.
Jan 10 17:11:40 compute-0 podman[200829]: 2026-01-10 17:11:40.834589922 +0000 UTC m=+0.050126029 container create d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:11:40 compute-0 systemd[1]: Started libpod-conmon-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope.
Jan 10 17:11:40 compute-0 podman[200829]: 2026-01-10 17:11:40.81257337 +0000 UTC m=+0.028109457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:40 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:40 compute-0 podman[200829]: 2026-01-10 17:11:40.954844842 +0000 UTC m=+0.170380929 container init d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 17:11:40 compute-0 podman[200829]: 2026-01-10 17:11:40.962853421 +0000 UTC m=+0.178389538 container start d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:11:40 compute-0 podman[200829]: 2026-01-10 17:11:40.967652679 +0000 UTC m=+0.183188776 container attach d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:11:41 compute-0 sudo[200920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqvcolccvkolufchbwzqcqqfjwxtpgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065100.6172652-770-258523656904194/AnsiballZ_stat.py'
Jan 10 17:11:41 compute-0 sudo[200920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:41 compute-0 python3.9[200922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:41 compute-0 sudo[200920]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]: {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     "0": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "devices": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "/dev/loop3"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             ],
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_name": "ceph_lv0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_size": "21470642176",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "name": "ceph_lv0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "tags": {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_name": "ceph",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.crush_device_class": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.encrypted": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.objectstore": "bluestore",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_id": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.vdo": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.with_tpm": "0"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             },
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "vg_name": "ceph_vg0"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         }
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     ],
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     "1": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "devices": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "/dev/loop4"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             ],
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_name": "ceph_lv1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_size": "21470642176",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "name": "ceph_lv1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "tags": {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_name": "ceph",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.crush_device_class": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.encrypted": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.objectstore": "bluestore",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_id": "1",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.vdo": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.with_tpm": "0"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             },
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "vg_name": "ceph_vg1"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         }
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     ],
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     "2": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "devices": [
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "/dev/loop5"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             ],
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_name": "ceph_lv2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_size": "21470642176",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "name": "ceph_lv2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "tags": {
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.cluster_name": "ceph",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.crush_device_class": "",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.encrypted": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.objectstore": "bluestore",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osd_id": "2",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.vdo": "0",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:                 "ceph.with_tpm": "0"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             },
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "type": "block",
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:             "vg_name": "ceph_vg2"
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:         }
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]:     ]
Jan 10 17:11:41 compute-0 beautiful_mahavira[200886]: }
Jan 10 17:11:41 compute-0 systemd[1]: libpod-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope: Deactivated successfully.
Jan 10 17:11:41 compute-0 podman[200829]: 2026-01-10 17:11:41.30653886 +0000 UTC m=+0.522074927 container died d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a-merged.mount: Deactivated successfully.
Jan 10 17:11:41 compute-0 podman[200829]: 2026-01-10 17:11:41.352852429 +0000 UTC m=+0.568388496 container remove d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:11:41 compute-0 systemd[1]: libpod-conmon-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope: Deactivated successfully.
Jan 10 17:11:41 compute-0 sudo[200608]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:41 compute-0 sudo[200986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:11:41 compute-0 sudo[200986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:41 compute-0 sudo[200986]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:41 compute-0 sudo[201033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:11:41 compute-0 sudo[201033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:41 compute-0 ceph-mon[75249]: pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:41 compute-0 sudo[201108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltuumelxywmvellrbfkzocbnqzrpdnas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065100.6172652-770-258523656904194/AnsiballZ_copy.py'
Jan 10 17:11:41 compute-0 sudo[201108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:41 compute-0 python3.9[201110]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065100.6172652-770-258523656904194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:41 compute-0 sudo[201108]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.832006304 +0000 UTC m=+0.057000377 container create fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:11:41 compute-0 systemd[1]: Started libpod-conmon-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope.
Jan 10 17:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.816388166 +0000 UTC m=+0.041382209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.92599958 +0000 UTC m=+0.150993633 container init fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.936183822 +0000 UTC m=+0.161177845 container start fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:11:41 compute-0 happy_gates[201160]: 167 167
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.942011819 +0000 UTC m=+0.167005862 container attach fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:11:41 compute-0 systemd[1]: libpod-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope: Deactivated successfully.
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.943509642 +0000 UTC m=+0.168503665 container died fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-09643f5472ecdff20441e64cba84c9202e7e059b4778127df19b32b392e7982e-merged.mount: Deactivated successfully.
Jan 10 17:11:41 compute-0 podman[201123]: 2026-01-10 17:11:41.980330218 +0000 UTC m=+0.205324241 container remove fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:11:41 compute-0 systemd[1]: libpod-conmon-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope: Deactivated successfully.
Jan 10 17:11:42 compute-0 podman[201242]: 2026-01-10 17:11:42.186918084 +0000 UTC m=+0.055656158 container create b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:11:42 compute-0 systemd[1]: Started libpod-conmon-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope.
Jan 10 17:11:42 compute-0 podman[201242]: 2026-01-10 17:11:42.158225351 +0000 UTC m=+0.026963464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:11:42 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:11:42 compute-0 podman[201242]: 2026-01-10 17:11:42.301716418 +0000 UTC m=+0.170454531 container init b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:11:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:42 compute-0 podman[201242]: 2026-01-10 17:11:42.314236097 +0000 UTC m=+0.182974200 container start b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:11:42 compute-0 podman[201242]: 2026-01-10 17:11:42.318955542 +0000 UTC m=+0.187693625 container attach b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:11:42 compute-0 sudo[201332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujafsaxncwencjfkmhqbgazdohpgikp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065101.9927835-770-146702629131011/AnsiballZ_stat.py'
Jan 10 17:11:42 compute-0 sudo[201332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:42 compute-0 python3.9[201334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:42 compute-0 sudo[201332]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:42 compute-0 sudo[201510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaisfskulyjgjauxtmhfypbcaznsddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065101.9927835-770-146702629131011/AnsiballZ_copy.py'
Jan 10 17:11:42 compute-0 sudo[201510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:43 compute-0 python3.9[201515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065101.9927835-770-146702629131011/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:43 compute-0 sudo[201510]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:43 compute-0 lvm[201534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:11:43 compute-0 lvm[201534]: VG ceph_vg2 finished
Jan 10 17:11:43 compute-0 lvm[201533]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:11:43 compute-0 lvm[201533]: VG ceph_vg1 finished
Jan 10 17:11:43 compute-0 lvm[201532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:11:43 compute-0 lvm[201532]: VG ceph_vg0 finished
Jan 10 17:11:43 compute-0 unruffled_pike[201298]: {}
Jan 10 17:11:43 compute-0 systemd[1]: libpod-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Deactivated successfully.
Jan 10 17:11:43 compute-0 systemd[1]: libpod-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Consumed 1.419s CPU time.
Jan 10 17:11:43 compute-0 podman[201562]: 2026-01-10 17:11:43.25733491 +0000 UTC m=+0.022980050 container died b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd-merged.mount: Deactivated successfully.
Jan 10 17:11:43 compute-0 podman[201562]: 2026-01-10 17:11:43.297736149 +0000 UTC m=+0.063381289 container remove b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:11:43 compute-0 systemd[1]: libpod-conmon-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Deactivated successfully.
Jan 10 17:11:43 compute-0 sudo[201033]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:11:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:11:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:43 compute-0 sudo[201627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:11:43 compute-0 sudo[201627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:11:43 compute-0 sudo[201627]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:43 compute-0 ceph-mon[75249]: pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:11:43 compute-0 sudo[201725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jliuawybivxbcudlaerxwixdymgwjeoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065103.296403-770-148352254771167/AnsiballZ_stat.py'
Jan 10 17:11:43 compute-0 sudo[201725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:43 compute-0 python3.9[201727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:43 compute-0 sudo[201725]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:44 compute-0 sudo[201848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkouslrbojhcazneoxnuayxvcjeucvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065103.296403-770-148352254771167/AnsiballZ_copy.py'
Jan 10 17:11:44 compute-0 sudo[201848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:11:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:11:44 compute-0 python3.9[201850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065103.296403-770-148352254771167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:44 compute-0 sudo[201848]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:45 compute-0 sudo[202000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swecwcwptiozsxqioeipybgsiyvyezeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065104.7805882-770-207836343549169/AnsiballZ_stat.py'
Jan 10 17:11:45 compute-0 sudo[202000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:45 compute-0 python3.9[202002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:45 compute-0 sudo[202000]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:45 compute-0 ceph-mon[75249]: pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:45 compute-0 sudo[202123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcpxusdtbsjafloppdfenzazzqnujdcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065104.7805882-770-207836343549169/AnsiballZ_copy.py'
Jan 10 17:11:45 compute-0 sudo[202123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:46 compute-0 python3.9[202125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065104.7805882-770-207836343549169/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:46 compute-0 sudo[202123]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:46 compute-0 sudo[202275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmjquurfrsfsednqbemgpwvccjjuukez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065106.3700457-770-278820222182827/AnsiballZ_stat.py'
Jan 10 17:11:46 compute-0 sudo[202275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:46 compute-0 python3.9[202277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:11:47 compute-0 sudo[202275]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:47 compute-0 podman[202278]: 2026-01-10 17:11:47.181105345 +0000 UTC m=+0.168834164 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:11:47 compute-0 sudo[202425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtfwbuwxcsflujfjcswzopmnfhudmfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065106.3700457-770-278820222182827/AnsiballZ_copy.py'
Jan 10 17:11:47 compute-0 sudo[202425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:47 compute-0 ceph-mon[75249]: pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:47 compute-0 python3.9[202427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065106.3700457-770-278820222182827/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:47 compute-0 sudo[202425]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:48 compute-0 python3.9[202577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:11:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.913 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:11:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:11:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:11:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:49 compute-0 sudo[202730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquxsswzmyhmfchporojswuqyzammcko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065108.9174883-976-199908110424935/AnsiballZ_seboolean.py'
Jan 10 17:11:49 compute-0 sudo[202730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:49 compute-0 ceph-mon[75249]: pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:49 compute-0 python3.9[202732]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 10 17:11:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:50 compute-0 sudo[202730]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:51 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 10 17:11:51 compute-0 sudo[202887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opafrxekggijihxqkvfosnfyyndeqfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065111.1443613-984-176577803951631/AnsiballZ_copy.py'
Jan 10 17:11:51 compute-0 sudo[202887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:51 compute-0 podman[202889]: 2026-01-10 17:11:51.635077049 +0000 UTC m=+0.092751972 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:11:51 compute-0 ceph-mon[75249]: pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:51 compute-0 python3.9[202890]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:51 compute-0 sudo[202887]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:52 compute-0 sudo[203060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpaukonpanfrlstbjlvagssiaatkjtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065111.9816554-984-112073289352107/AnsiballZ_copy.py'
Jan 10 17:11:52 compute-0 sudo[203060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:52 compute-0 python3.9[203062]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:52 compute-0 sudo[203060]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:53 compute-0 sudo[203212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkyzliszfysmwqfxzvxxuthtgpsewggm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065112.7901206-984-124903298649603/AnsiballZ_copy.py'
Jan 10 17:11:53 compute-0 sudo[203212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:53 compute-0 python3.9[203214]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:53 compute-0 sudo[203212]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:53 compute-0 ceph-mon[75249]: pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:53 compute-0 sudo[203364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkzejfnldenhlchkgziesmmouargtuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065113.5842588-984-55829680144718/AnsiballZ_copy.py'
Jan 10 17:11:53 compute-0 sudo[203364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:54 compute-0 python3.9[203366]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:54 compute-0 sudo[203364]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:54 compute-0 sudo[203516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqivqwcdornaskxgalvgzfpckxdnmpxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065114.3341558-984-233225300770826/AnsiballZ_copy.py'
Jan 10 17:11:54 compute-0 sudo[203516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:54 compute-0 python3.9[203518]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:54 compute-0 sudo[203516]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:55 compute-0 sudo[203668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjscqtwvnfmmgjyxqkjfhcgdsbvtpwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065115.161132-1020-24734019187149/AnsiballZ_copy.py'
Jan 10 17:11:55 compute-0 sudo[203668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:55 compute-0 python3.9[203670]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:55 compute-0 sudo[203668]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:55 compute-0 ceph-mon[75249]: pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:56 compute-0 sudo[203820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtazwebuuixkhabiluvsdbaiuphwikh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065115.786355-1020-225299003490249/AnsiballZ_copy.py'
Jan 10 17:11:56 compute-0 sudo[203820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:56 compute-0 python3.9[203822]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:56 compute-0 sudo[203820]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:56 compute-0 sudo[203972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpnytnqjcncttyokdztcomskteyijml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065116.5448766-1020-27946871392042/AnsiballZ_copy.py'
Jan 10 17:11:56 compute-0 sudo[203972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:57 compute-0 python3.9[203974]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:57 compute-0 sudo[203972]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:57 compute-0 sudo[204124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhszcmxpwhorqgrhozenkftkzmfyvlco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065117.2384627-1020-203799730607600/AnsiballZ_copy.py'
Jan 10 17:11:57 compute-0 sudo[204124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:57 compute-0 ceph-mon[75249]: pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:57 compute-0 python3.9[204126]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:57 compute-0 sudo[204124]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:58 compute-0 sudo[204278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kszmujwnfhvnzpiafwdmrhildviyrneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065117.9557958-1020-232707454417526/AnsiballZ_copy.py'
Jan 10 17:11:58 compute-0 sudo[204278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:58 compute-0 sshd-session[204151]: Invalid user kali from 216.36.124.133 port 47976
Jan 10 17:11:58 compute-0 sshd-session[204151]: Connection closed by invalid user kali 216.36.124.133 port 47976 [preauth]
Jan 10 17:11:58 compute-0 python3.9[204280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:11:58 compute-0 sudo[204278]: pam_unix(sudo:session): session closed for user root
Jan 10 17:11:59 compute-0 sudo[204430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmipwikeksltnlqmrszmnvvcaeyjouva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065118.7606318-1056-246545191337233/AnsiballZ_systemd.py'
Jan 10 17:11:59 compute-0 sudo[204430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:11:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:11:59 compute-0 python3.9[204432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:11:59 compute-0 systemd[1]: Reloading.
Jan 10 17:11:59 compute-0 systemd-rc-local-generator[204454]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:11:59 compute-0 systemd-sysv-generator[204458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:11:59 compute-0 ceph-mon[75249]: pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:11:59 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 10 17:11:59 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 10 17:11:59 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 10 17:11:59 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 10 17:11:59 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 10 17:11:59 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 10 17:11:59 compute-0 sudo[204430]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:00 compute-0 sudo[204622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnooxsciunbtfwfyjnwcrrcdyrfbygow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065120.1456647-1056-250752862722940/AnsiballZ_systemd.py'
Jan 10 17:12:00 compute-0 sudo[204622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:00 compute-0 python3.9[204624]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:12:00 compute-0 systemd[1]: Reloading.
Jan 10 17:12:01 compute-0 systemd-rc-local-generator[204652]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:01 compute-0 systemd-sysv-generator[204655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:01 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 10 17:12:01 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 10 17:12:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 10 17:12:01 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 10 17:12:01 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 10 17:12:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 10 17:12:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 10 17:12:01 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 10 17:12:01 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 10 17:12:01 compute-0 sudo[204622]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:01 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 10 17:12:01 compute-0 ceph-mon[75249]: pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:01 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 10 17:12:01 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 10 17:12:01 compute-0 sudo[204846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xddazhdqlteylbijhcolsbtdflambdab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065121.61318-1056-211806397655825/AnsiballZ_systemd.py'
Jan 10 17:12:02 compute-0 sudo[204846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:02 compute-0 python3.9[204848]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:12:02 compute-0 systemd[1]: Reloading.
Jan 10 17:12:02 compute-0 systemd-rc-local-generator[204879]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:02 compute-0 systemd-sysv-generator[204882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:02 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 10 17:12:02 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 10 17:12:02 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 10 17:12:02 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 10 17:12:02 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 10 17:12:02 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 10 17:12:02 compute-0 sudo[204846]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:02 compute-0 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 57fdc539-06a9-4d4f-9887-6b4e9d44cade
Jan 10 17:12:02 compute-0 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 10 17:12:03 compute-0 sudo[205061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmrbosbvaxheuzbtrtybyprawzvggmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065122.9338074-1056-252609680313824/AnsiballZ_systemd.py'
Jan 10 17:12:03 compute-0 sudo[205061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:03 compute-0 python3.9[205063]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:12:03 compute-0 systemd[1]: Reloading.
Jan 10 17:12:03 compute-0 systemd-sysv-generator[205091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:03 compute-0 systemd-rc-local-generator[205085]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:03 compute-0 ceph-mon[75249]: pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:03 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 10 17:12:03 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 10 17:12:03 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 10 17:12:03 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 10 17:12:03 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 10 17:12:03 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 10 17:12:03 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 10 17:12:03 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 10 17:12:03 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 10 17:12:03 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 10 17:12:03 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 10 17:12:03 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 10 17:12:04 compute-0 sudo[205061]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:04 compute-0 sudo[205275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnuxnizxrngyfsppjrfzhnlkmihrxbvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065124.1654572-1056-183284082842977/AnsiballZ_systemd.py'
Jan 10 17:12:04 compute-0 sudo[205275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:04 compute-0 python3.9[205277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:12:04 compute-0 systemd[1]: Reloading.
Jan 10 17:12:04 compute-0 systemd-rc-local-generator[205305]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:04 compute-0 systemd-sysv-generator[205308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:05 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 10 17:12:05 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 10 17:12:05 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 10 17:12:05 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 10 17:12:05 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 10 17:12:05 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 10 17:12:05 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 10 17:12:05 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 10 17:12:05 compute-0 sudo[205275]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:05 compute-0 ceph-mon[75249]: pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:05 compute-0 sudo[205487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtqkujhafrpxnpoqbrbeivioiiatsou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065125.595007-1093-86722113260790/AnsiballZ_file.py'
Jan 10 17:12:05 compute-0 sudo[205487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:06 compute-0 python3.9[205489]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:06 compute-0 sudo[205487]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:06 compute-0 sudo[205639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzdngqtdunieuzwcwnmeanaowocsxwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065126.3556442-1101-127158864277716/AnsiballZ_find.py'
Jan 10 17:12:06 compute-0 sudo[205639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:06 compute-0 python3.9[205641]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 17:12:06 compute-0 sudo[205639]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:07 compute-0 sudo[205791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobhuhbuxhgfvpucjribfobrsyxqtvuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065127.0869374-1109-71262984409944/AnsiballZ_command.py'
Jan 10 17:12:07 compute-0 sudo[205791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:07 compute-0 python3.9[205793]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:07 compute-0 sudo[205791]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:07 compute-0 ceph-mon[75249]: pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:08 compute-0 python3.9[205947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:09 compute-0 python3.9[206097]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:09 compute-0 ceph-mon[75249]: pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:10 compute-0 python3.9[206218]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065128.889809-1128-19812264224297/.source.xml follow=False _original_basename=secret.xml.j2 checksum=502388dc21d4b7fd5859feb0fdbea4c523b66fd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:10 compute-0 sudo[206368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javmmafhuntuedwfpknkdpexujkjygqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065130.3087611-1143-112116155237402/AnsiballZ_command.py'
Jan 10 17:12:10 compute-0 sudo[206368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:10 compute-0 python3.9[206370]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:10 compute-0 polkitd[43532]: Registered Authentication Agent for unix-process:206372:290157 (system bus name :1.2506 [pkttyagent --process 206372 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 10 17:12:10 compute-0 polkitd[43532]: Unregistered Authentication Agent for unix-process:206372:290157 (system bus name :1.2506, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 10 17:12:10 compute-0 polkitd[43532]: Registered Authentication Agent for unix-process:206371:290156 (system bus name :1.2507 [pkttyagent --process 206371 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 10 17:12:10 compute-0 polkitd[43532]: Unregistered Authentication Agent for unix-process:206371:290156 (system bus name :1.2507, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 10 17:12:11 compute-0 sudo[206368]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:11 compute-0 python3.9[206532]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:11 compute-0 ceph-mon[75249]: pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:12 compute-0 sudo[206682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrssibefunrphzvtbczeykhnatjnaps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065132.02309-1159-130756539227700/AnsiballZ_command.py'
Jan 10 17:12:12 compute-0 sudo[206682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:12 compute-0 sudo[206682]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:12 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 10 17:12:12 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.018s CPU time.
Jan 10 17:12:12 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 10 17:12:13 compute-0 sudo[206835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tycmwiduigxicigatqeldlicuewcodwf ; FSID=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 KEY=AQC7hGJpAAAAABAAX18vjtSqzsniwZc0Ni8AQg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065132.8248708-1167-223843998367897/AnsiballZ_command.py'
Jan 10 17:12:13 compute-0 sudo[206835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:13 compute-0 polkitd[43532]: Registered Authentication Agent for unix-process:206838:290409 (system bus name :1.2510 [pkttyagent --process 206838 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 10 17:12:13 compute-0 polkitd[43532]: Unregistered Authentication Agent for unix-process:206838:290409 (system bus name :1.2510, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 10 17:12:13 compute-0 sudo[206835]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:13 compute-0 ceph-mon[75249]: pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:14 compute-0 sudo[206993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuudyntjeqyyhftgnvfxvhenbewohqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065133.6640456-1175-281299486050343/AnsiballZ_copy.py'
Jan 10 17:12:14 compute-0 sudo[206993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:14 compute-0 python3.9[206995]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:14 compute-0 sudo[206993]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:14 compute-0 sudo[207145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekggdisuzgmgzegmbwjrkvhhcdbxfwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065134.4450076-1183-255740683179064/AnsiballZ_stat.py'
Jan 10 17:12:14 compute-0 sudo[207145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:15 compute-0 python3.9[207147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:15 compute-0 sudo[207145]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:15 compute-0 sudo[207268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcwcsvakhhruttazqfxbpvkbldcmekmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065134.4450076-1183-255740683179064/AnsiballZ_copy.py'
Jan 10 17:12:15 compute-0 sudo[207268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:15 compute-0 python3.9[207270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065134.4450076-1183-255740683179064/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:15 compute-0 sudo[207268]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:15 compute-0 ceph-mon[75249]: pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:16 compute-0 sudo[207420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovpcsecnetzplekmhmjtnaanfbssaxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065136.0212464-1199-25636581667119/AnsiballZ_file.py'
Jan 10 17:12:16 compute-0 sudo[207420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:16 compute-0 python3.9[207422]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:16 compute-0 sudo[207420]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:17 compute-0 sudo[207583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprtsixbjyurdpwbbvkkqhevtxdeiylz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065136.9357245-1207-111816638174461/AnsiballZ_stat.py'
Jan 10 17:12:17 compute-0 sudo[207583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:17 compute-0 podman[207546]: 2026-01-10 17:12:17.366135315 +0000 UTC m=+0.116894084 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 10 17:12:17 compute-0 python3.9[207593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:17 compute-0 sudo[207583]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:17 compute-0 ceph-mon[75249]: pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:17 compute-0 sudo[207676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrxvolyrnhhiyfppfyktpajlziuvlab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065136.9357245-1207-111816638174461/AnsiballZ_file.py'
Jan 10 17:12:17 compute-0 sudo[207676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:18 compute-0 python3.9[207678]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:18 compute-0 sudo[207676]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:18 compute-0 sudo[207828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teulvphmgxeymisvmahjguayqetouctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065138.3145792-1219-22221332912846/AnsiballZ_stat.py'
Jan 10 17:12:18 compute-0 sudo[207828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:18 compute-0 python3.9[207830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:18 compute-0 sudo[207828]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:19 compute-0 sudo[207906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplvivhjdknicwaqqgbumclawtfwmalu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065138.3145792-1219-22221332912846/AnsiballZ_file.py'
Jan 10 17:12:19 compute-0 sudo[207906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:19 compute-0 python3.9[207908]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2ezhikv2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:19 compute-0 sudo[207906]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:19 compute-0 ceph-mon[75249]: pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:19 compute-0 sudo[208058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzvzarhyblizygpkxcmhysrjhyhchkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065139.6344898-1231-169429177137952/AnsiballZ_stat.py'
Jan 10 17:12:19 compute-0 sudo[208058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:20 compute-0 python3.9[208060]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:20 compute-0 sudo[208058]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:20 compute-0 sudo[208136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nflxhieomxbftalnmkzwvcecfoswjiyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065139.6344898-1231-169429177137952/AnsiballZ_file.py'
Jan 10 17:12:20 compute-0 sudo[208136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:20 compute-0 python3.9[208138]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:20 compute-0 sudo[208136]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:21 compute-0 sudo[208288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dirvuvpmuliqwhyskepgjnhoqqrqssgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065141.022598-1244-147504064063493/AnsiballZ_command.py'
Jan 10 17:12:21 compute-0 sudo[208288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:21 compute-0 python3.9[208290]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:21 compute-0 sudo[208288]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:21 compute-0 ceph-mon[75249]: pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:22 compute-0 podman[208368]: 2026-01-10 17:12:22.082323981 +0000 UTC m=+0.076744532 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 10 17:12:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:22 compute-0 sudo[208460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udktcqssuhobohhcaiylyhnlwlkmyasj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768065141.8463507-1252-54758222431245/AnsiballZ_edpm_nftables_from_files.py'
Jan 10 17:12:22 compute-0 sudo[208460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:22 compute-0 python3[208462]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 17:12:22 compute-0 sudo[208460]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:23 compute-0 sudo[208612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euymqlcukcyrntbcspivgtrghcxzawds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065142.9183433-1260-29639913770369/AnsiballZ_stat.py'
Jan 10 17:12:23 compute-0 sudo[208612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:23 compute-0 python3.9[208614]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:23 compute-0 sudo[208612]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:23 compute-0 ceph-mon[75249]: pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:23 compute-0 sudo[208690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwzbjjnlcvfxfbzmgbkfgfjbkdycehx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065142.9183433-1260-29639913770369/AnsiballZ_file.py'
Jan 10 17:12:23 compute-0 sudo[208690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:24 compute-0 python3.9[208692]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:24 compute-0 sudo[208690]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:24 compute-0 sudo[208842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdepucrnngxqqjqbiqfnxouiyowsanel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065144.3799002-1272-62833923519049/AnsiballZ_stat.py'
Jan 10 17:12:24 compute-0 sudo[208842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:25 compute-0 python3.9[208844]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:25 compute-0 sudo[208842]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:25 compute-0 sudo[208920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nypmovcadvsoekklqmmheozlusgcdngg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065144.3799002-1272-62833923519049/AnsiballZ_file.py'
Jan 10 17:12:25 compute-0 sudo[208920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:25 compute-0 python3.9[208922]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:25 compute-0 sudo[208920]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:25 compute-0 ceph-mon[75249]: pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:26 compute-0 sudo[209072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbkoigdnbybwvvejaumkiybtegxgvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065145.737224-1284-69622479045943/AnsiballZ_stat.py'
Jan 10 17:12:26 compute-0 sudo[209072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:26 compute-0 python3.9[209074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:26 compute-0 sudo[209072]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:26 compute-0 sudo[209150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldodnmevyjszsjltlzaoborxogfzmmws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065145.737224-1284-69622479045943/AnsiballZ_file.py'
Jan 10 17:12:26 compute-0 sudo[209150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:26 compute-0 python3.9[209152]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:26 compute-0 sudo[209150]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:26 compute-0 ceph-mon[75249]: pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:27 compute-0 sudo[209302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numwdveflblrqxmiqmbcasqmhqchcujw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065147.082709-1296-92094886759161/AnsiballZ_stat.py'
Jan 10 17:12:27 compute-0 sudo[209302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:27 compute-0 python3.9[209304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:27 compute-0 sudo[209302]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:28 compute-0 sudo[209380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadmuhwsnrvktqenxwfsewypkhicmhls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065147.082709-1296-92094886759161/AnsiballZ_file.py'
Jan 10 17:12:28 compute-0 sudo[209380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:28 compute-0 python3.9[209382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:28 compute-0 sudo[209380]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:29 compute-0 sudo[209532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outpisrztycwacnmlyrwygyumljhcjja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065148.4302511-1308-14969805023374/AnsiballZ_stat.py'
Jan 10 17:12:29 compute-0 sudo[209532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:29 compute-0 python3.9[209534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:29 compute-0 sudo[209532]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:29 compute-0 ceph-mon[75249]: pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:29 compute-0 sudo[209657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjmoqtqspmrfrkrbfpxarxxkhzgvqpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065148.4302511-1308-14969805023374/AnsiballZ_copy.py'
Jan 10 17:12:29 compute-0 sudo[209657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:29 compute-0 python3.9[209659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065148.4302511-1308-14969805023374/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:29 compute-0 sudo[209657]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:30 compute-0 sudo[209809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpaknrrngqexfijuxyxxqsqugnurlzcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065150.1510575-1323-68163624248088/AnsiballZ_file.py'
Jan 10 17:12:30 compute-0 sudo[209809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:30 compute-0 python3.9[209811]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:30 compute-0 sudo[209809]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:32 compute-0 ceph-mon[75249]: pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:32 compute-0 sudo[209961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmzufeevtrxveziusxfhzgfivihyeie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065150.9682007-1331-241493465601104/AnsiballZ_command.py'
Jan 10 17:12:32 compute-0 sudo[209961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:32 compute-0 python3.9[209963]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:32 compute-0 sudo[209961]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:33 compute-0 ceph-mon[75249]: pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:33 compute-0 sudo[210116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufscpwwdftatkllpwxbfewsosgldykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065153.1366189-1339-118447715571604/AnsiballZ_blockinfile.py'
Jan 10 17:12:33 compute-0 sudo[210116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:33 compute-0 python3.9[210118]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:33 compute-0 sudo[210116]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:34 compute-0 sudo[210268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsndddjksjojevlxnjvyhypogmstdlsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065154.1698117-1348-255991802202622/AnsiballZ_command.py'
Jan 10 17:12:34 compute-0 sudo[210268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:34 compute-0 python3.9[210270]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:34 compute-0 sudo[210268]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:35 compute-0 sudo[210421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrjpjqookzbhqaifprvznfqtmhlpyefx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065155.045-1356-107145065154035/AnsiballZ_stat.py'
Jan 10 17:12:35 compute-0 sudo[210421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:35 compute-0 python3.9[210423]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:12:35 compute-0 ceph-mon[75249]: pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:35 compute-0 sudo[210421]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:36 compute-0 sudo[210575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evillwkxtgepozdqkwjnartuucnkacdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065155.8521042-1364-230352594907728/AnsiballZ_command.py'
Jan 10 17:12:36 compute-0 sudo[210575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:36 compute-0 python3.9[210577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:12:36 compute-0 sudo[210575]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:37 compute-0 sudo[210730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgdnedsyszkskhqafgtcmgugbvdzptxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065156.8241377-1372-102540576496194/AnsiballZ_file.py'
Jan 10 17:12:37 compute-0 sudo[210730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:37 compute-0 python3.9[210732]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:37 compute-0 sudo[210730]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:37 compute-0 ceph-mon[75249]: pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:37 compute-0 sudo[210882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxozmsjoiehoamaizldfpifiuwedtyxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065157.6038177-1380-122392488073948/AnsiballZ_stat.py'
Jan 10 17:12:37 compute-0 sudo[210882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:12:38
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes']
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:12:38 compute-0 python3.9[210884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:38 compute-0 sudo[210882]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:38 compute-0 sudo[211005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrlqpwdkpfnpzkutpoyftgixcbbjqhuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065157.6038177-1380-122392488073948/AnsiballZ_copy.py'
Jan 10 17:12:38 compute-0 sudo[211005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:38 compute-0 python3.9[211007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065157.6038177-1380-122392488073948/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:38 compute-0 sudo[211005]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:12:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:12:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:12:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:39 compute-0 sudo[211157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkjgfahfsjemsnrnkpntxzdqidmneys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065159.1715386-1395-232336190912929/AnsiballZ_stat.py'
Jan 10 17:12:39 compute-0 sudo[211157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:39 compute-0 ceph-mon[75249]: pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:39 compute-0 python3.9[211159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:39 compute-0 sudo[211157]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:40 compute-0 sudo[211280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temgkjnikmvwrpyrpqrnqjqtsoywmiaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065159.1715386-1395-232336190912929/AnsiballZ_copy.py'
Jan 10 17:12:40 compute-0 sudo[211280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:40 compute-0 python3.9[211282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065159.1715386-1395-232336190912929/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:40 compute-0 sudo[211280]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:41 compute-0 sudo[211432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhztodvjbgxwlhhqxwmlwdbpwzvtkzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065160.7380064-1410-159791184435883/AnsiballZ_stat.py'
Jan 10 17:12:41 compute-0 sudo[211432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:41 compute-0 python3.9[211434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:12:41 compute-0 sudo[211432]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:41 compute-0 ceph-mon[75249]: pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:41 compute-0 sudo[211555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdjacwzjjrnohqriplckygatuwnssulg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065160.7380064-1410-159791184435883/AnsiballZ_copy.py'
Jan 10 17:12:41 compute-0 sudo[211555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:41 compute-0 python3.9[211557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065160.7380064-1410-159791184435883/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:12:41 compute-0 sudo[211555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:42 compute-0 sudo[211707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxobyixwhnlzfkkmjphculxgkjbooqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065162.1855552-1425-224662875938736/AnsiballZ_systemd.py'
Jan 10 17:12:42 compute-0 sudo[211707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:42 compute-0 python3.9[211709]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:12:42 compute-0 systemd[1]: Reloading.
Jan 10 17:12:43 compute-0 systemd-rc-local-generator[211734]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:43 compute-0 systemd-sysv-generator[211738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:43 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 10 17:12:43 compute-0 sudo[211707]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:43 compute-0 sudo[211772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:12:43 compute-0 sudo[211772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:43 compute-0 sudo[211772]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:43 compute-0 ceph-mon[75249]: pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:43 compute-0 sudo[211801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:12:43 compute-0 sudo[211801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:44 compute-0 sudo[211961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkwrbwneykbmjewqxrujazaxcebmjmyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065163.622718-1433-197735243432803/AnsiballZ_systemd.py'
Jan 10 17:12:44 compute-0 sudo[211961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:44 compute-0 python3.9[211963]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 17:12:44 compute-0 systemd[1]: Reloading.
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:12:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:12:44 compute-0 sudo[211801]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:44 compute-0 systemd-rc-local-generator[212007]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:12:44 compute-0 systemd-sysv-generator[212011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:12:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:12:44 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:12:44 compute-0 sudo[212016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:12:44 compute-0 sudo[212016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:44 compute-0 sudo[212016]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:44 compute-0 systemd[1]: Reloading.
Jan 10 17:12:44 compute-0 systemd-rc-local-generator[212090]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:12:44 compute-0 systemd-sysv-generator[212097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:12:45 compute-0 sudo[212043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:12:45 compute-0 sudo[212043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:45 compute-0 sudo[211961]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.395930205 +0000 UTC m=+0.072509747 container create b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:12:45 compute-0 systemd[1]: Started libpod-conmon-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope.
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.365656392 +0000 UTC m=+0.042235994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.500778156 +0000 UTC m=+0.177357698 container init b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.508274115 +0000 UTC m=+0.184853627 container start b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.51152281 +0000 UTC m=+0.188102332 container attach b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:12:45 compute-0 admiring_gauss[212154]: 167 167
Jan 10 17:12:45 compute-0 systemd[1]: libpod-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope: Deactivated successfully.
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.517820943 +0000 UTC m=+0.194400465 container died b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 10 17:12:45 compute-0 sshd-session[153223]: Connection closed by 192.168.122.30 port 32862
Jan 10 17:12:45 compute-0 sshd-session[153220]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:12:45 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 10 17:12:45 compute-0 systemd[1]: session-49.scope: Consumed 4min 1.728s CPU time.
Jan 10 17:12:45 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Jan 10 17:12:45 compute-0 systemd-logind[798]: Removed session 49.
Jan 10 17:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d4e90fb64ba93a3edc0a4dd8ecaac67c5687f7fbaea358802a8958e63059908-merged.mount: Deactivated successfully.
Jan 10 17:12:45 compute-0 podman[212138]: 2026-01-10 17:12:45.56463108 +0000 UTC m=+0.241210582 container remove b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:12:45 compute-0 systemd[1]: libpod-conmon-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope: Deactivated successfully.
Jan 10 17:12:45 compute-0 ceph-mon[75249]: pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:45 compute-0 podman[212177]: 2026-01-10 17:12:45.758345535 +0000 UTC m=+0.046779577 container create 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 17:12:45 compute-0 systemd[1]: Started libpod-conmon-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope.
Jan 10 17:12:45 compute-0 podman[212177]: 2026-01-10 17:12:45.740251806 +0000 UTC m=+0.028685828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:45 compute-0 podman[212177]: 2026-01-10 17:12:45.879823791 +0000 UTC m=+0.168257833 container init 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:12:45 compute-0 podman[212177]: 2026-01-10 17:12:45.895189469 +0000 UTC m=+0.183623511 container start 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 17:12:45 compute-0 podman[212177]: 2026-01-10 17:12:45.926499173 +0000 UTC m=+0.214933215 container attach 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:12:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:46 compute-0 musing_galileo[212193]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:12:46 compute-0 musing_galileo[212193]: --> All data devices are unavailable
Jan 10 17:12:46 compute-0 systemd[1]: libpod-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope: Deactivated successfully.
Jan 10 17:12:46 compute-0 podman[212177]: 2026-01-10 17:12:46.524680635 +0000 UTC m=+0.813114657 container died 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2-merged.mount: Deactivated successfully.
Jan 10 17:12:46 compute-0 podman[212177]: 2026-01-10 17:12:46.583554904 +0000 UTC m=+0.871988926 container remove 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:12:46 compute-0 systemd[1]: libpod-conmon-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope: Deactivated successfully.
Jan 10 17:12:46 compute-0 sudo[212043]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:46 compute-0 sudo[212225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:12:46 compute-0 sudo[212225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:46 compute-0 sudo[212225]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:46 compute-0 sudo[212250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:12:46 compute-0 sudo[212250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.102003438 +0000 UTC m=+0.061573589 container create cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 17:12:47 compute-0 systemd[1]: Started libpod-conmon-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope.
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.075307609 +0000 UTC m=+0.034877860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.199981348 +0000 UTC m=+0.159551569 container init cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.212123742 +0000 UTC m=+0.171693923 container start cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.216428748 +0000 UTC m=+0.175998939 container attach cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:12:47 compute-0 musing_bhaskara[212302]: 167 167
Jan 10 17:12:47 compute-0 systemd[1]: libpod-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope: Deactivated successfully.
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.22163667 +0000 UTC m=+0.181206861 container died cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 17:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-303261a2b1ed1a789606cf9803987ec944758b267dbbfc9e5d15096fc3a4f641-merged.mount: Deactivated successfully.
Jan 10 17:12:47 compute-0 podman[212286]: 2026-01-10 17:12:47.274465092 +0000 UTC m=+0.234035283 container remove cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:12:47 compute-0 systemd[1]: libpod-conmon-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope: Deactivated successfully.
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.511518572 +0000 UTC m=+0.066734499 container create e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 17:12:47 compute-0 systemd[1]: Started libpod-conmon-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope.
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.482483915 +0000 UTC m=+0.037699892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.605068953 +0000 UTC m=+0.160284880 container init e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.616806956 +0000 UTC m=+0.172022873 container start e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.621379999 +0000 UTC m=+0.176595926 container attach e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 17:12:47 compute-0 ceph-mon[75249]: pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:47 compute-0 podman[212340]: 2026-01-10 17:12:47.726143557 +0000 UTC m=+0.159565099 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]: {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     "0": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "devices": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "/dev/loop3"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             ],
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_name": "ceph_lv0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_size": "21470642176",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "name": "ceph_lv0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "tags": {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_name": "ceph",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.crush_device_class": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.encrypted": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.objectstore": "bluestore",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_id": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.vdo": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.with_tpm": "0"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             },
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "vg_name": "ceph_vg0"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         }
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     ],
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     "1": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "devices": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "/dev/loop4"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             ],
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_name": "ceph_lv1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_size": "21470642176",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "name": "ceph_lv1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "tags": {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_name": "ceph",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.crush_device_class": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.encrypted": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.objectstore": "bluestore",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_id": "1",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.vdo": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.with_tpm": "0"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             },
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "vg_name": "ceph_vg1"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         }
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     ],
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     "2": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "devices": [
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "/dev/loop5"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             ],
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_name": "ceph_lv2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_size": "21470642176",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "name": "ceph_lv2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "tags": {
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.cluster_name": "ceph",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.crush_device_class": "",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.encrypted": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.objectstore": "bluestore",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osd_id": "2",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.vdo": "0",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:                 "ceph.with_tpm": "0"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             },
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "type": "block",
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:             "vg_name": "ceph_vg2"
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:         }
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]:     ]
Jan 10 17:12:47 compute-0 dreamy_northcutt[212343]: }
Jan 10 17:12:47 compute-0 systemd[1]: libpod-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope: Deactivated successfully.
Jan 10 17:12:47 compute-0 podman[212326]: 2026-01-10 17:12:47.993369878 +0000 UTC m=+0.548585775 container died e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0-merged.mount: Deactivated successfully.
Jan 10 17:12:48 compute-0 podman[212326]: 2026-01-10 17:12:48.047689264 +0000 UTC m=+0.602905161 container remove e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:12:48 compute-0 systemd[1]: libpod-conmon-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope: Deactivated successfully.
Jan 10 17:12:48 compute-0 sudo[212250]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:48 compute-0 sudo[212389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:12:48 compute-0 sudo[212389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:48 compute-0 sudo[212389]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:48 compute-0 sudo[212414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:12:48 compute-0 sudo[212414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.692333351 +0000 UTC m=+0.065292347 container create 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:12:48 compute-0 systemd[1]: Started libpod-conmon-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope.
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.671561824 +0000 UTC m=+0.044520820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:48 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.788765006 +0000 UTC m=+0.161723982 container init 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.798769408 +0000 UTC m=+0.171728364 container start 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.802778365 +0000 UTC m=+0.175737331 container attach 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:12:48 compute-0 elastic_golick[212467]: 167 167
Jan 10 17:12:48 compute-0 systemd[1]: libpod-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope: Deactivated successfully.
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.806163853 +0000 UTC m=+0.179122819 container died 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 17:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2d182955f174c666d2b585a58b21850ca49c98bdd51786e84bacf2ee8416379-merged.mount: Deactivated successfully.
Jan 10 17:12:48 compute-0 podman[212451]: 2026-01-10 17:12:48.853192736 +0000 UTC m=+0.226151702 container remove 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:12:48 compute-0 systemd[1]: libpod-conmon-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope: Deactivated successfully.
Jan 10 17:12:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.913 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:12:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:12:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:12:49 compute-0 podman[212489]: 2026-01-10 17:12:49.095744717 +0000 UTC m=+0.071637292 container create 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:12:49 compute-0 systemd[1]: Started libpod-conmon-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope.
Jan 10 17:12:49 compute-0 podman[212489]: 2026-01-10 17:12:49.063130265 +0000 UTC m=+0.039022900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:12:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:49 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:12:49 compute-0 podman[212489]: 2026-01-10 17:12:49.196110927 +0000 UTC m=+0.172003542 container init 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:12:49 compute-0 podman[212489]: 2026-01-10 17:12:49.210171827 +0000 UTC m=+0.186064412 container start 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:12:49 compute-0 podman[212489]: 2026-01-10 17:12:49.214649448 +0000 UTC m=+0.190542033 container attach 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:12:49 compute-0 ceph-mon[75249]: pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:49 compute-0 lvm[212584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:12:49 compute-0 lvm[212584]: VG ceph_vg0 finished
Jan 10 17:12:49 compute-0 lvm[212585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:12:49 compute-0 lvm[212585]: VG ceph_vg1 finished
Jan 10 17:12:49 compute-0 lvm[212587]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:12:49 compute-0 lvm[212587]: VG ceph_vg2 finished
Jan 10 17:12:50 compute-0 reverent_mestorf[212506]: {}
Jan 10 17:12:50 compute-0 systemd[1]: libpod-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Deactivated successfully.
Jan 10 17:12:50 compute-0 podman[212489]: 2026-01-10 17:12:50.112563609 +0000 UTC m=+1.088456174 container died 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:12:50 compute-0 systemd[1]: libpod-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Consumed 1.464s CPU time.
Jan 10 17:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e-merged.mount: Deactivated successfully.
Jan 10 17:12:50 compute-0 podman[212489]: 2026-01-10 17:12:50.175741383 +0000 UTC m=+1.151633958 container remove 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:12:50 compute-0 systemd[1]: libpod-conmon-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Deactivated successfully.
Jan 10 17:12:50 compute-0 sudo[212414]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:12:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:12:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:50 compute-0 sudo[212602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:12:50 compute-0 sudo[212602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:12:50 compute-0 sudo[212602]: pam_unix(sudo:session): session closed for user root
Jan 10 17:12:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:12:51 compute-0 ceph-mon[75249]: pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:51 compute-0 sshd-session[212627]: Accepted publickey for zuul from 192.168.122.30 port 43830 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:12:51 compute-0 systemd-logind[798]: New session 50 of user zuul.
Jan 10 17:12:51 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 10 17:12:51 compute-0 sshd-session[212627]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:12:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:52 compute-0 python3.9[212780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:12:53 compute-0 podman[212809]: 2026-01-10 17:12:53.075031446 +0000 UTC m=+0.072694753 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 10 17:12:53 compute-0 ceph-mon[75249]: pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:53 compute-0 python3.9[212953]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:12:53 compute-0 network[212970]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:12:53 compute-0 network[212971]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:12:53 compute-0 network[212972]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:12:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:55 compute-0 ceph-mon[75249]: pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:57 compute-0 ceph-mon[75249]: pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:12:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:12:59 compute-0 ceph-mon[75249]: pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:00 compute-0 sudo[213242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmztzftbospbwoqbafuytesvzvbhxtjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065180.1515253-42-123434767106960/AnsiballZ_setup.py'
Jan 10 17:13:00 compute-0 sudo[213242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:00 compute-0 python3.9[213244]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 17:13:01 compute-0 sudo[213242]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:01 compute-0 ceph-mon[75249]: pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:01 compute-0 sudo[213326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczdzdozwkaivdrwnahzelypodexmtgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065180.1515253-42-123434767106960/AnsiballZ_dnf.py'
Jan 10 17:13:01 compute-0 sudo[213326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:02 compute-0 python3.9[213328]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:13:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:03 compute-0 ceph-mon[75249]: pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:05 compute-0 ceph-mon[75249]: pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:07 compute-0 sudo[213326]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:07 compute-0 ceph-mon[75249]: pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.485246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187485468, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 2363109, "memory_usage": 2405216, "flush_reason": "Manual Compaction"}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187503618, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2291267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9118, "largest_seqno": 11159, "table_properties": {"data_size": 2282087, "index_size": 5802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17876, "raw_average_key_size": 19, "raw_value_size": 2263717, "raw_average_value_size": 2465, "num_data_blocks": 267, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064954, "oldest_key_time": 1768064954, "file_creation_time": 1768065187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18461 microseconds, and 6705 cpu microseconds.
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.503756) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2291267 bytes OK
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.503803) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.505981) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506002) EVENT_LOG_v1 {"time_micros": 1768065187505999, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2354588, prev total WAL file size 2354588, number of live WAL files 2.
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.507270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2237KB)], [26(4823KB)]
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187507550, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 7230724, "oldest_snapshot_seqno": -1}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3244 keys, 6088494 bytes, temperature: kUnknown
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187575772, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 6088494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6062275, "index_size": 17021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74895, "raw_average_key_size": 23, "raw_value_size": 5999676, "raw_average_value_size": 1849, "num_data_blocks": 751, "num_entries": 3244, "num_filter_entries": 3244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.576281) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 6088494 bytes
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.578193) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.8 rd, 89.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.7 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 3758, records dropped: 514 output_compression: NoCompression
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.578248) EVENT_LOG_v1 {"time_micros": 1768065187578232, "job": 10, "event": "compaction_finished", "compaction_time_micros": 68368, "compaction_time_cpu_micros": 40696, "output_level": 6, "num_output_files": 1, "total_output_size": 6088494, "num_input_records": 3758, "num_output_records": 3244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187579265, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187581168, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:07 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:13:08 compute-0 sudo[213479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkqfgekahsnicxunlqvncokolsxdehxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065187.583807-54-39088666676346/AnsiballZ_stat.py'
Jan 10 17:13:08 compute-0 sudo[213479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:08 compute-0 python3.9[213481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:08 compute-0 sudo[213479]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:09 compute-0 sudo[213631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhfjclmjguzosrjxrqyejivdgxxtlxam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065188.6608677-64-201136786656032/AnsiballZ_command.py'
Jan 10 17:13:09 compute-0 sudo[213631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:09 compute-0 python3.9[213633]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:13:09 compute-0 sudo[213631]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:09 compute-0 ceph-mon[75249]: pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:10 compute-0 sudo[213784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcksxspfxebtoruxduzenbwckmomymdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065189.8225951-74-146984630667832/AnsiballZ_stat.py'
Jan 10 17:13:10 compute-0 sudo[213784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:10 compute-0 python3.9[213786]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:13:10 compute-0 sudo[213784]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:11 compute-0 sudo[213936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctlestvzpvwimwdtdlwkzlrihneywsct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065190.6479678-82-254165530939312/AnsiballZ_command.py'
Jan 10 17:13:11 compute-0 sudo[213936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:11 compute-0 python3.9[213938]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:13:11 compute-0 sudo[213936]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:11 compute-0 ceph-mon[75249]: pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:11 compute-0 sudo[214089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfmjmzxgustlakbmoorccnbktxofdtup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065191.4991152-90-157725941526714/AnsiballZ_stat.py'
Jan 10 17:13:11 compute-0 sudo[214089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:11 compute-0 python3.9[214091]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:13:11 compute-0 sudo[214089]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:12 compute-0 sudo[214212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxzrcpfmbbvvqsagnegvbxwhlqdaedm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065191.4991152-90-157725941526714/AnsiballZ_copy.py'
Jan 10 17:13:12 compute-0 sudo[214212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:12 compute-0 python3.9[214214]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065191.4991152-90-157725941526714/.source.iscsi _original_basename=.67edewgc follow=False checksum=20aa512fad3df14aa1fa2c6777f3d96658f0cf72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:12 compute-0 sudo[214212]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:13 compute-0 ceph-mon[75249]: pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:13 compute-0 sudo[214364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbblsqklqebslvzdivsfzfknlsmxcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065193.1342106-105-270178087996924/AnsiballZ_file.py'
Jan 10 17:13:13 compute-0 sudo[214364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:13 compute-0 python3.9[214366]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:13 compute-0 sudo[214364]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:14 compute-0 sudo[214516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virpzhsyyggdfdhuimawdlhlihxellfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065194.2037406-113-36343783708838/AnsiballZ_lineinfile.py'
Jan 10 17:13:14 compute-0 sudo[214516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:14 compute-0 python3.9[214518]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:14 compute-0 sudo[214516]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:15 compute-0 ceph-mon[75249]: pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:15 compute-0 sudo[214668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywzoubwounexrpafqpnodpxfdwility ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065195.2532148-122-45327591372705/AnsiballZ_systemd_service.py'
Jan 10 17:13:15 compute-0 sudo[214668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:16 compute-0 python3.9[214670]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:13:16 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 10 17:13:16 compute-0 sudo[214668]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:16 compute-0 sudo[214824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vztluxfxyyortcxfwyyojjoreslaasvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065196.491057-130-156489683204986/AnsiballZ_systemd_service.py'
Jan 10 17:13:16 compute-0 sudo[214824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:17 compute-0 python3.9[214826]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:13:17 compute-0 ceph-mon[75249]: pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:18 compute-0 podman[214829]: 2026-01-10 17:13:18.163293397 +0000 UTC m=+0.150320709 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 17:13:18 compute-0 systemd[1]: Reloading.
Jan 10 17:13:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:18 compute-0 systemd-rc-local-generator[214883]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:13:18 compute-0 systemd-sysv-generator[214888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:13:18 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 10 17:13:18 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 10 17:13:18 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 10 17:13:18 compute-0 systemd[1]: Started Open-iSCSI.
Jan 10 17:13:18 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 10 17:13:18 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 10 17:13:18 compute-0 sudo[214824]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:19 compute-0 ceph-mon[75249]: pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:19 compute-0 python3.9[215055]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:13:19 compute-0 network[215072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:13:19 compute-0 network[215073]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:13:19 compute-0 network[215074]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:13:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:21 compute-0 ceph-mon[75249]: pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:23 compute-0 podman[215165]: 2026-01-10 17:13:23.282032139 +0000 UTC m=+0.115956366 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:13:23 compute-0 ceph-mon[75249]: pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:24 compute-0 sudo[215364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwztmpktlmiurewizatkhtpqtbshvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065204.4260545-153-168645638634313/AnsiballZ_dnf.py'
Jan 10 17:13:24 compute-0 sudo[215364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:25 compute-0 python3.9[215366]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:13:25 compute-0 ceph-mon[75249]: pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:26 compute-0 sshd-session[215368]: Invalid user linaro from 216.36.124.133 port 48998
Jan 10 17:13:26 compute-0 sshd-session[215368]: Connection closed by invalid user linaro 216.36.124.133 port 48998 [preauth]
Jan 10 17:13:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 17:13:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 17:13:27 compute-0 systemd[1]: Reloading.
Jan 10 17:13:27 compute-0 ceph-mon[75249]: pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:27 compute-0 systemd-rc-local-generator[215412]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:13:27 compute-0 systemd-sysv-generator[215415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:13:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 17:13:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 17:13:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 17:13:28 compute-0 systemd[1]: run-r17f0a6a0424845bbadf0dbee14a7a581.service: Deactivated successfully.
Jan 10 17:13:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:28 compute-0 sudo[215364]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:29 compute-0 sudo[215682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjqyktpqctluaacqcuybunfpltbxjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065208.763077-162-142859858237442/AnsiballZ_file.py'
Jan 10 17:13:29 compute-0 sudo[215682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:29 compute-0 python3.9[215684]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 10 17:13:29 compute-0 sudo[215682]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:29 compute-0 ceph-mon[75249]: pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:30 compute-0 sudo[215834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstvikjrvmwyemqxpchoxkjxmwyexeyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065209.7196167-170-247033456207019/AnsiballZ_modprobe.py'
Jan 10 17:13:30 compute-0 sudo[215834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:30 compute-0 python3.9[215836]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 10 17:13:30 compute-0 sudo[215834]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:31 compute-0 sudo[215990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxlnlyozvuvatvvixhhzurocfcchncx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065210.752674-178-189788000314792/AnsiballZ_stat.py'
Jan 10 17:13:31 compute-0 sudo[215990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:31 compute-0 python3.9[215992]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:13:31 compute-0 sudo[215990]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:31 compute-0 sudo[216113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmxvrwhpqbzunbznilfwueihjnbxtcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065210.752674-178-189788000314792/AnsiballZ_copy.py'
Jan 10 17:13:31 compute-0 sudo[216113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:31 compute-0 ceph-mon[75249]: pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:31 compute-0 python3.9[216115]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065210.752674-178-189788000314792/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:31 compute-0 sudo[216113]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:32 compute-0 sudo[216265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppznvoldypdceijxxxunldlfdiikajmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065212.2657797-194-118839888910810/AnsiballZ_lineinfile.py'
Jan 10 17:13:32 compute-0 sudo[216265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:32 compute-0 python3.9[216267]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:32 compute-0 sudo[216265]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:33 compute-0 sudo[216417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbutrgbxykpiooexoglkopwhddthxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065213.013119-202-229259384350978/AnsiballZ_systemd.py'
Jan 10 17:13:33 compute-0 sudo[216417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:33 compute-0 ceph-mon[75249]: pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:34 compute-0 python3.9[216419]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:13:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 10 17:13:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 10 17:13:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 10 17:13:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 10 17:13:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 10 17:13:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:34 compute-0 sudo[216417]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:34 compute-0 sudo[216573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfopwhpkzgtukaqwacwtxfqrwdhrgqnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065214.441535-210-19336137615987/AnsiballZ_command.py'
Jan 10 17:13:34 compute-0 sudo[216573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:34 compute-0 python3.9[216575]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:13:35 compute-0 sudo[216573]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:35 compute-0 sudo[216726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debxqyizthuaqazkpvvowpgcofaetgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065215.4172764-220-32614064327309/AnsiballZ_stat.py'
Jan 10 17:13:35 compute-0 sudo[216726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:35 compute-0 ceph-mon[75249]: pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:35 compute-0 python3.9[216728]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:13:35 compute-0 sudo[216726]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:36 compute-0 sudo[216878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhybhtwdypffwhbbwswxjcoxlusjjrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065216.1849818-229-181869298825951/AnsiballZ_stat.py'
Jan 10 17:13:36 compute-0 sudo[216878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:36 compute-0 python3.9[216880]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:13:36 compute-0 sudo[216878]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:37 compute-0 sudo[217001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-autwcnvypeukvdwqytgzcnudbkzqdkyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065216.1849818-229-181869298825951/AnsiballZ_copy.py'
Jan 10 17:13:37 compute-0 sudo[217001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:37 compute-0 python3.9[217003]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065216.1849818-229-181869298825951/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:37 compute-0 sudo[217001]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:37 compute-0 ceph-mon[75249]: pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:38 compute-0 sudo[217153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhldnxdqjkuqwyvnmiwhgulefnjxtrku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065217.6646173-244-137285577468121/AnsiballZ_command.py'
Jan 10 17:13:38 compute-0 sudo[217153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:13:38
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes']
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:13:38 compute-0 python3.9[217155]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:13:38 compute-0 sudo[217153]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:38 compute-0 sudo[217306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulnqmoitwperyvrhiottdofaoxtrexbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065218.5010908-252-46454707863575/AnsiballZ_lineinfile.py'
Jan 10 17:13:38 compute-0 sudo[217306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:13:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:13:39 compute-0 python3.9[217308]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:39 compute-0 sudo[217306]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:13:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:13:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:39 compute-0 sudo[217458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abgmpntyoqrxwxkdxtnguqapphwbxrtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065219.297979-260-186685746377448/AnsiballZ_replace.py'
Jan 10 17:13:39 compute-0 sudo[217458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:39 compute-0 ceph-mon[75249]: pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:40 compute-0 python3.9[217460]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:40 compute-0 sudo[217458]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:40 compute-0 sudo[217610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-susboifxucxdsbzutlqsjswllsljauel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065220.3701046-268-251925402803579/AnsiballZ_replace.py'
Jan 10 17:13:40 compute-0 sudo[217610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:40 compute-0 python3.9[217612]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:40 compute-0 sudo[217610]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:41 compute-0 sudo[217762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrrvgnbdntimwtcnqdabozsvudxzauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065221.1668448-277-67823742898459/AnsiballZ_lineinfile.py'
Jan 10 17:13:41 compute-0 sudo[217762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:41 compute-0 python3.9[217764]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:41 compute-0 sudo[217762]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:41 compute-0 ceph-mon[75249]: pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:42 compute-0 sudo[217914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtdcxbnbknzrtamglbtwfbzseyybjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065221.967875-277-99491827354485/AnsiballZ_lineinfile.py'
Jan 10 17:13:42 compute-0 sudo[217914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:42 compute-0 python3.9[217916]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:42 compute-0 sudo[217914]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:43 compute-0 sudo[218066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldwokaaqkuijwhkiuvnsyrmlsiedpxyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065222.7657442-277-234899161776963/AnsiballZ_lineinfile.py'
Jan 10 17:13:43 compute-0 sudo[218066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:43 compute-0 python3.9[218068]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:43 compute-0 sudo[218066]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:43 compute-0 sudo[218218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfskitgvwgveczcckbemluiyiimbkhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065223.5377119-277-266691596591106/AnsiballZ_lineinfile.py'
Jan 10 17:13:43 compute-0 sudo[218218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:43 compute-0 ceph-mon[75249]: pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:44 compute-0 python3.9[218220]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:44 compute-0 sudo[218218]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:13:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:13:44 compute-0 sudo[218370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzpjzzmlorpmtfcvddtqzkzauvigxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065224.349888-306-154722076986009/AnsiballZ_stat.py'
Jan 10 17:13:44 compute-0 sudo[218370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:44 compute-0 python3.9[218372]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:13:44 compute-0 sudo[218370]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:45 compute-0 sudo[218524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzxrhwitthaebtzolsokzzsnmwiwzmvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065225.1689644-314-219802693823360/AnsiballZ_command.py'
Jan 10 17:13:45 compute-0 sudo[218524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:45 compute-0 python3.9[218526]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:13:45 compute-0 sudo[218524]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:45 compute-0 ceph-mon[75249]: pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:46 compute-0 sudo[218677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mffyouytppyzynrsuabzeqpxbgbgzbpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065225.9588006-323-100188270154879/AnsiballZ_systemd_service.py'
Jan 10 17:13:46 compute-0 sudo[218677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:46 compute-0 python3.9[218679]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:13:46 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 10 17:13:46 compute-0 sudo[218677]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:47 compute-0 ceph-mon[75249]: pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:47 compute-0 sudo[218833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukctrlqngyabjrrbcenxdcdxemfgnqgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065226.9469094-331-98177426088062/AnsiballZ_systemd_service.py'
Jan 10 17:13:47 compute-0 sudo[218833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:47 compute-0 python3.9[218835]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:13:47 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 10 17:13:47 compute-0 udevadm[218840]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 10 17:13:47 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 10 17:13:47 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 10 17:13:47 compute-0 multipathd[218844]: --------start up--------
Jan 10 17:13:47 compute-0 multipathd[218844]: read /etc/multipath.conf
Jan 10 17:13:47 compute-0 multipathd[218844]: path checkers start up
Jan 10 17:13:47 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 10 17:13:47 compute-0 sudo[218833]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:48 compute-0 sudo[219008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xndmyvoalpslypsjdhqtzckxbcoxybgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065228.3638704-343-72597521070983/AnsiballZ_file.py'
Jan 10 17:13:48 compute-0 sudo[219008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:48 compute-0 podman[218975]: 2026-01-10 17:13:48.809072546 +0000 UTC m=+0.159638500 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 10 17:13:48 compute-0 python3.9[219015]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 10 17:13:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:13:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:13:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:13:48 compute-0 sudo[219008]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:49 compute-0 ceph-mon[75249]: pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:49 compute-0 sudo[219178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cucwajpmifbacytwxiyzdltolqvvmfkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065229.1053648-351-53121362878699/AnsiballZ_modprobe.py'
Jan 10 17:13:49 compute-0 sudo[219178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:49 compute-0 python3.9[219180]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 10 17:13:49 compute-0 kernel: Key type psk registered
Jan 10 17:13:49 compute-0 sudo[219178]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:50 compute-0 sudo[219274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:13:50 compute-0 sudo[219274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:50 compute-0 sudo[219274]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:50 compute-0 sudo[219317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:13:50 compute-0 sudo[219317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:50 compute-0 sudo[219391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmodtzjxqwfufdaemhknfjbqhouopxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065230.183878-359-273205891564077/AnsiballZ_stat.py'
Jan 10 17:13:50 compute-0 sudo[219391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:50 compute-0 python3.9[219393]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:13:50 compute-0 sudo[219391]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:51 compute-0 sudo[219317]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:13:51 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:13:51 compute-0 sudo[219557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhgeywmftqvsgjkqkupqfnoljqmutyxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065230.183878-359-273205891564077/AnsiballZ_copy.py'
Jan 10 17:13:51 compute-0 sudo[219557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:51 compute-0 sudo[219533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:13:51 compute-0 sudo[219533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:51 compute-0 sudo[219533]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:51 compute-0 sudo[219573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:13:51 compute-0 sudo[219573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:51 compute-0 python3.9[219570]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065230.183878-359-273205891564077/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:51 compute-0 sudo[219557]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:51 compute-0 ceph-mon[75249]: pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:13:51 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.462405792 +0000 UTC m=+0.034643714 container create 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:51 compute-0 systemd[1]: Started libpod-conmon-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope.
Jan 10 17:13:51 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.448528787 +0000 UTC m=+0.020766739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.558604185 +0000 UTC m=+0.130842207 container init 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.573857912 +0000 UTC m=+0.146095884 container start 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 17:13:51 compute-0 naughty_driscoll[219651]: 167 167
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.579097175 +0000 UTC m=+0.151335187 container attach 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 17:13:51 compute-0 systemd[1]: libpod-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope: Deactivated successfully.
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.58063637 +0000 UTC m=+0.152874322 container died 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bba9fe20a4dcb21d7ca30235f18fbd4588ab3fbb26cbaa9309d5757a781a69d-merged.mount: Deactivated successfully.
Jan 10 17:13:51 compute-0 podman[219617]: 2026-01-10 17:13:51.631456566 +0000 UTC m=+0.203694498 container remove 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:51 compute-0 systemd[1]: libpod-conmon-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope: Deactivated successfully.
Jan 10 17:13:51 compute-0 podman[219751]: 2026-01-10 17:13:51.851767568 +0000 UTC m=+0.051515897 container create 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:13:51 compute-0 systemd[1]: Started libpod-conmon-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope.
Jan 10 17:13:51 compute-0 podman[219751]: 2026-01-10 17:13:51.826406066 +0000 UTC m=+0.026154405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:51 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:51 compute-0 sudo[219820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbnjlejlyarclriijtxnahlkkhcdouxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065231.5966337-375-16600683009385/AnsiballZ_lineinfile.py'
Jan 10 17:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:51 compute-0 sudo[219820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:51 compute-0 podman[219751]: 2026-01-10 17:13:51.949433404 +0000 UTC m=+0.149181763 container init 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 17:13:51 compute-0 podman[219751]: 2026-01-10 17:13:51.961323752 +0000 UTC m=+0.161072091 container start 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:13:51 compute-0 podman[219751]: 2026-01-10 17:13:51.964832054 +0000 UTC m=+0.164580433 container attach 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:13:52 compute-0 python3.9[219822]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:13:52 compute-0 sudo[219820]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:52 compute-0 gracious_sanderson[219814]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:13:52 compute-0 gracious_sanderson[219814]: --> All data devices are unavailable
Jan 10 17:13:52 compute-0 systemd[1]: libpod-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope: Deactivated successfully.
Jan 10 17:13:52 compute-0 podman[219751]: 2026-01-10 17:13:52.595570057 +0000 UTC m=+0.795318416 container died 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 17:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33-merged.mount: Deactivated successfully.
Jan 10 17:13:52 compute-0 podman[219751]: 2026-01-10 17:13:52.653973295 +0000 UTC m=+0.853721624 container remove 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 17:13:52 compute-0 systemd[1]: libpod-conmon-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope: Deactivated successfully.
Jan 10 17:13:52 compute-0 sudo[220001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyboxacberdgoqhogovdnrljbnxrwhqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065232.3007677-383-54972621876829/AnsiballZ_systemd.py'
Jan 10 17:13:52 compute-0 sudo[220001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:52 compute-0 sudo[219573]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:52 compute-0 sudo[220004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:13:52 compute-0 sudo[220004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:52 compute-0 sudo[220004]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:52 compute-0 sudo[220029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:13:52 compute-0 sudo[220029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:52 compute-0 python3.9[220003]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:13:53 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 10 17:13:53 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 10 17:13:53 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 10 17:13:53 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 10 17:13:53 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 10 17:13:53 compute-0 sudo[220001]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.311340077 +0000 UTC m=+0.059884132 container create 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:53 compute-0 systemd[1]: Started libpod-conmon-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope.
Jan 10 17:13:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.287895082 +0000 UTC m=+0.036439177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.397729504 +0000 UTC m=+0.146273599 container init 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.40614545 +0000 UTC m=+0.154689505 container start 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.411066984 +0000 UTC m=+0.159611069 container attach 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:13:53 compute-0 pensive_hofstadter[220134]: 167 167
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.413209526 +0000 UTC m=+0.161753581 container died 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:13:53 compute-0 systemd[1]: libpod-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope: Deactivated successfully.
Jan 10 17:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cdb004859ed21ea40f84001c6076fa79aed439ff16fe6d0685d6b4d32457db1-merged.mount: Deactivated successfully.
Jan 10 17:13:53 compute-0 ceph-mon[75249]: pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:53 compute-0 podman[220096]: 2026-01-10 17:13:53.466088232 +0000 UTC m=+0.214632287 container remove 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 17:13:53 compute-0 systemd[1]: libpod-conmon-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope: Deactivated successfully.
Jan 10 17:13:53 compute-0 podman[220118]: 2026-01-10 17:13:53.478969269 +0000 UTC m=+0.116864088 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 17:13:53 compute-0 podman[220255]: 2026-01-10 17:13:53.714424604 +0000 UTC m=+0.076455076 container create 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:13:53 compute-0 sudo[220295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyymaqcgfekzpaspmvgfuqsbpaslljgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065233.3604043-391-149009112243462/AnsiballZ_dnf.py'
Jan 10 17:13:53 compute-0 sudo[220295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:53 compute-0 podman[220255]: 2026-01-10 17:13:53.685786207 +0000 UTC m=+0.047816749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:53 compute-0 systemd[1]: Started libpod-conmon-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope.
Jan 10 17:13:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:53 compute-0 podman[220255]: 2026-01-10 17:13:53.868484329 +0000 UTC m=+0.230514881 container init 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:13:53 compute-0 podman[220255]: 2026-01-10 17:13:53.883126517 +0000 UTC m=+0.245157009 container start 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:13:53 compute-0 podman[220255]: 2026-01-10 17:13:53.887463494 +0000 UTC m=+0.249493996 container attach 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:13:54 compute-0 python3.9[220297]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 17:13:54 compute-0 agitated_lalande[220300]: {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     "0": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "devices": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "/dev/loop3"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             ],
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_name": "ceph_lv0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_size": "21470642176",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "name": "ceph_lv0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "tags": {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_name": "ceph",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.crush_device_class": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.encrypted": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.objectstore": "bluestore",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_id": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.vdo": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.with_tpm": "0"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             },
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "vg_name": "ceph_vg0"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         }
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     ],
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     "1": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "devices": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "/dev/loop4"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             ],
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_name": "ceph_lv1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_size": "21470642176",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "name": "ceph_lv1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "tags": {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_name": "ceph",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.crush_device_class": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.encrypted": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.objectstore": "bluestore",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_id": "1",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.vdo": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.with_tpm": "0"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             },
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "vg_name": "ceph_vg1"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         }
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     ],
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     "2": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "devices": [
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "/dev/loop5"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             ],
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_name": "ceph_lv2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_size": "21470642176",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "name": "ceph_lv2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "tags": {
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.cluster_name": "ceph",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.crush_device_class": "",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.encrypted": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.objectstore": "bluestore",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osd_id": "2",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.vdo": "0",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:                 "ceph.with_tpm": "0"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             },
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "type": "block",
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:             "vg_name": "ceph_vg2"
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:         }
Jan 10 17:13:54 compute-0 agitated_lalande[220300]:     ]
Jan 10 17:13:54 compute-0 agitated_lalande[220300]: }
Jan 10 17:13:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:54 compute-0 systemd[1]: libpod-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope: Deactivated successfully.
Jan 10 17:13:54 compute-0 podman[220255]: 2026-01-10 17:13:54.223086038 +0000 UTC m=+0.585116560 container died 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f-merged.mount: Deactivated successfully.
Jan 10 17:13:54 compute-0 podman[220255]: 2026-01-10 17:13:54.278740156 +0000 UTC m=+0.640770628 container remove 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:54 compute-0 systemd[1]: libpod-conmon-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope: Deactivated successfully.
Jan 10 17:13:54 compute-0 sudo[220029]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:54 compute-0 sudo[220322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:13:54 compute-0 sudo[220322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:54 compute-0 sudo[220322]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:54 compute-0 sudo[220347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:13:54 compute-0 sudo[220347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.847206819 +0000 UTC m=+0.057746910 container create a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:13:54 compute-0 systemd[1]: Started libpod-conmon-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope.
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.816566203 +0000 UTC m=+0.027106364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.953861637 +0000 UTC m=+0.164401788 container init a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.964774097 +0000 UTC m=+0.175314168 container start a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.969940538 +0000 UTC m=+0.180480699 container attach a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:13:54 compute-0 objective_swartz[220401]: 167 167
Jan 10 17:13:54 compute-0 systemd[1]: libpod-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope: Deactivated successfully.
Jan 10 17:13:54 compute-0 podman[220385]: 2026-01-10 17:13:54.971209985 +0000 UTC m=+0.181750056 container died a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceefce939f86b21b838a1d169ded7351918188b8f57c7fd2ab91f54fe905530b-merged.mount: Deactivated successfully.
Jan 10 17:13:55 compute-0 podman[220385]: 2026-01-10 17:13:55.020906378 +0000 UTC m=+0.231446459 container remove a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:55 compute-0 systemd[1]: libpod-conmon-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope: Deactivated successfully.
Jan 10 17:13:55 compute-0 podman[220425]: 2026-01-10 17:13:55.221813013 +0000 UTC m=+0.063409525 container create 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:13:55 compute-0 systemd[1]: Started libpod-conmon-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope.
Jan 10 17:13:55 compute-0 podman[220425]: 2026-01-10 17:13:55.190643991 +0000 UTC m=+0.032240553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:13:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:13:55 compute-0 podman[220425]: 2026-01-10 17:13:55.348177248 +0000 UTC m=+0.189773820 container init 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:55 compute-0 podman[220425]: 2026-01-10 17:13:55.361172558 +0000 UTC m=+0.202769070 container start 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:13:55 compute-0 podman[220425]: 2026-01-10 17:13:55.365298359 +0000 UTC m=+0.206894871 container attach 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:13:55 compute-0 ceph-mon[75249]: pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:56 compute-0 lvm[220522]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:13:56 compute-0 lvm[220522]: VG ceph_vg0 finished
Jan 10 17:13:56 compute-0 lvm[220524]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:13:56 compute-0 lvm[220524]: VG ceph_vg1 finished
Jan 10 17:13:56 compute-0 lvm[220526]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:13:56 compute-0 lvm[220526]: VG ceph_vg2 finished
Jan 10 17:13:56 compute-0 lvm[220527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:13:56 compute-0 lvm[220527]: VG ceph_vg0 finished
Jan 10 17:13:56 compute-0 eloquent_hawking[220442]: {}
Jan 10 17:13:56 compute-0 systemd[1]: libpod-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Deactivated successfully.
Jan 10 17:13:56 compute-0 systemd[1]: libpod-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Consumed 1.527s CPU time.
Jan 10 17:13:56 compute-0 podman[220425]: 2026-01-10 17:13:56.277981646 +0000 UTC m=+1.119578158 container died 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 17:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5-merged.mount: Deactivated successfully.
Jan 10 17:13:56 compute-0 podman[220425]: 2026-01-10 17:13:56.337454785 +0000 UTC m=+1.179051297 container remove 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 17:13:56 compute-0 systemd[1]: libpod-conmon-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Deactivated successfully.
Jan 10 17:13:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:56 compute-0 sudo[220347]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:13:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:13:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:56 compute-0 systemd[1]: Reloading.
Jan 10 17:13:56 compute-0 systemd-rc-local-generator[220594]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:13:56 compute-0 systemd-sysv-generator[220597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:13:56 compute-0 sudo[220543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:13:56 compute-0 systemd[1]: Reloading.
Jan 10 17:13:56 compute-0 sudo[220543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:13:56 compute-0 sudo[220543]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:56 compute-0 systemd-sysv-generator[220632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:13:56 compute-0 systemd-rc-local-generator[220628]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:13:57 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 10 17:13:57 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 10 17:13:57 compute-0 ceph-mon[75249]: pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:13:57 compute-0 lvm[220679]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:13:57 compute-0 lvm[220679]: VG ceph_vg0 finished
Jan 10 17:13:57 compute-0 lvm[220677]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:13:57 compute-0 lvm[220677]: VG ceph_vg1 finished
Jan 10 17:13:57 compute-0 lvm[220673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:13:57 compute-0 lvm[220673]: VG ceph_vg2 finished
Jan 10 17:13:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 17:13:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 10 17:13:57 compute-0 systemd[1]: Reloading.
Jan 10 17:13:57 compute-0 systemd-rc-local-generator[220730]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:13:57 compute-0 systemd-sysv-generator[220733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:13:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 17:13:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:58 compute-0 sudo[220295]: pam_unix(sudo:session): session closed for user root
Jan 10 17:13:59 compute-0 sudo[221872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwjnjxysvmifhpeeapvlvkiyqfpaqam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065238.701375-399-136759127425486/AnsiballZ_systemd_service.py'
Jan 10 17:13:59 compute-0 sudo[221872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:13:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:13:59 compute-0 python3.9[221894]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:13:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 17:13:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 10 17:13:59 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.070s CPU time.
Jan 10 17:13:59 compute-0 systemd[1]: run-rdde1c10fe66f48e89d9a73cd74f301f6.service: Deactivated successfully.
Jan 10 17:13:59 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 10 17:13:59 compute-0 iscsid[214895]: iscsid shutting down.
Jan 10 17:13:59 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 10 17:13:59 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 10 17:13:59 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 10 17:13:59 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 10 17:13:59 compute-0 ceph-mon[75249]: pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:13:59 compute-0 systemd[1]: Started Open-iSCSI.
Jan 10 17:13:59 compute-0 sudo[221872]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:00 compute-0 sudo[222186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrorsedolccxbyauqfzvdpviyxfbhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065239.7343142-407-21985870497029/AnsiballZ_systemd_service.py'
Jan 10 17:14:00 compute-0 sudo[222186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:00 compute-0 python3.9[222188]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:14:00 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 10 17:14:00 compute-0 multipathd[218844]: exit (signal)
Jan 10 17:14:00 compute-0 multipathd[218844]: --------shut down-------
Jan 10 17:14:00 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 10 17:14:00 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 10 17:14:00 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 10 17:14:00 compute-0 multipathd[222194]: --------start up--------
Jan 10 17:14:00 compute-0 multipathd[222194]: read /etc/multipath.conf
Jan 10 17:14:00 compute-0 multipathd[222194]: path checkers start up
Jan 10 17:14:00 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 10 17:14:00 compute-0 sudo[222186]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:01 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 10 17:14:01 compute-0 ceph-mon[75249]: pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:01 compute-0 python3.9[222351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 17:14:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:02 compute-0 sudo[222506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkznuhggcljslovwpvjonsvuqbmhhyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065242.1194599-425-62558696104913/AnsiballZ_file.py'
Jan 10 17:14:02 compute-0 sudo[222506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:02 compute-0 python3.9[222508]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:02 compute-0 sudo[222506]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:02 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 10 17:14:03 compute-0 ceph-mon[75249]: pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:03 compute-0 sudo[222659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmyuagapckkogryqcwfjvvvhfaupkzul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065243.1436534-436-8681879697173/AnsiballZ_systemd_service.py'
Jan 10 17:14:03 compute-0 sudo[222659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:03 compute-0 python3.9[222661]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:14:03 compute-0 systemd[1]: Reloading.
Jan 10 17:14:03 compute-0 systemd-rc-local-generator[222689]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:14:03 compute-0 systemd-sysv-generator[222692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:14:04 compute-0 sudo[222659]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:04 compute-0 python3.9[222846]: ansible-ansible.builtin.service_facts Invoked
Jan 10 17:14:05 compute-0 network[222863]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 17:14:05 compute-0 network[222864]: 'network-scripts' will be removed from distribution in near future.
Jan 10 17:14:05 compute-0 network[222865]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 17:14:05 compute-0 ceph-mon[75249]: pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:07 compute-0 ceph-mon[75249]: pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:09 compute-0 ceph-mon[75249]: pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:09 compute-0 sudo[223136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfznkteqkxuhqobsgakufchtzcbgqtfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065249.2150233-455-168435218893114/AnsiballZ_systemd_service.py'
Jan 10 17:14:09 compute-0 sudo[223136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:09 compute-0 python3.9[223138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:09 compute-0 sudo[223136]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:10 compute-0 sudo[223289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojcpswnyzlmidxbxggkxbbhkzfbrtoao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065250.1428142-455-250945455967364/AnsiballZ_systemd_service.py'
Jan 10 17:14:10 compute-0 sudo[223289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:10 compute-0 python3.9[223291]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:10 compute-0 sudo[223289]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:11 compute-0 sudo[223442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-absktwlubujpbuiviyewuubhfukbcynl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065250.9575284-455-69992634804035/AnsiballZ_systemd_service.py'
Jan 10 17:14:11 compute-0 sudo[223442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:11 compute-0 ceph-mon[75249]: pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:11 compute-0 python3.9[223444]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:11 compute-0 sudo[223442]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:12 compute-0 sudo[223595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnrxcthdtsyvlvxmqawciobxjbnzmvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065251.8572052-455-162850689177359/AnsiballZ_systemd_service.py'
Jan 10 17:14:12 compute-0 sudo[223595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:12 compute-0 python3.9[223597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:12 compute-0 sudo[223595]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:13 compute-0 sudo[223748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkvtleaqksjonchucgcvxcsceflyguvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065252.6732497-455-149386231385051/AnsiballZ_systemd_service.py'
Jan 10 17:14:13 compute-0 sudo[223748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 10 17:14:13 compute-0 python3.9[223750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:13 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 10 17:14:13 compute-0 sudo[223748]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:13 compute-0 ceph-mon[75249]: pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:14 compute-0 sudo[223903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etabruqrfuunitbnqimjndyzjaypjsel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065253.6804125-455-30502430105095/AnsiballZ_systemd_service.py'
Jan 10 17:14:14 compute-0 sudo[223903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:14 compute-0 python3.9[223905]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:14 compute-0 sudo[223903]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:14 compute-0 sudo[224056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdhtljmhwtbdbutgsrpmfmsygdvgortp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065254.4794278-455-246212644751799/AnsiballZ_systemd_service.py'
Jan 10 17:14:14 compute-0 sudo[224056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:15 compute-0 python3.9[224058]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:15 compute-0 sudo[224056]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:15 compute-0 ceph-mon[75249]: pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:15 compute-0 sudo[224209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvktnjxtdakbycxknrgvuzdansvcazun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065255.4033957-455-214468606311589/AnsiballZ_systemd_service.py'
Jan 10 17:14:15 compute-0 sudo[224209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:16 compute-0 python3.9[224211]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:14:16 compute-0 sudo[224209]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:16 compute-0 sudo[224362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlltmawzvupyhhtqmclpnpkvsvegmteu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065256.5279279-514-2259021537087/AnsiballZ_file.py'
Jan 10 17:14:16 compute-0 sudo[224362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:17 compute-0 python3.9[224364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:17 compute-0 sudo[224362]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:17 compute-0 ceph-mon[75249]: pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:17 compute-0 sudo[224514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbdwycsdwzqwparandaztfmxdqqhhapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065257.2793167-514-169059335951541/AnsiballZ_file.py'
Jan 10 17:14:17 compute-0 sudo[224514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:17 compute-0 python3.9[224516]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:17 compute-0 sudo[224514]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:18 compute-0 sudo[224666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdnqnbhxeomhszugoonowgwivutsdhuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065258.1520715-514-96486218181810/AnsiballZ_file.py'
Jan 10 17:14:18 compute-0 sudo[224666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:18 compute-0 python3.9[224668]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:18 compute-0 sudo[224666]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:19 compute-0 podman[224729]: 2026-01-10 17:14:19.140338164 +0000 UTC m=+0.127448798 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 17:14:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:19 compute-0 sudo[224842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczlhilkrzsgkiretolapqbfrnojcqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065258.9173093-514-203453954160424/AnsiballZ_file.py'
Jan 10 17:14:19 compute-0 sudo[224842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:19 compute-0 python3.9[224844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:19 compute-0 sudo[224842]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:19 compute-0 ceph-mon[75249]: pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:20 compute-0 sudo[224994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znxxgvmyizwhfplrfqzsqqxapkjtcfhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065259.661792-514-273558620088480/AnsiballZ_file.py'
Jan 10 17:14:20 compute-0 sudo[224994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:20 compute-0 python3.9[224996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:20 compute-0 sudo[224994]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:20 compute-0 sudo[225146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbdvpgeofwggejyycturnwgemiruqrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065260.4197717-514-249475946522100/AnsiballZ_file.py'
Jan 10 17:14:20 compute-0 sudo[225146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:20 compute-0 python3.9[225148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:21 compute-0 sudo[225146]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:21 compute-0 sudo[225298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzrgoxpnujrsmmsadmfegqpukbxykvxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065261.155439-514-277315822578993/AnsiballZ_file.py'
Jan 10 17:14:21 compute-0 sudo[225298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:21 compute-0 ceph-mon[75249]: pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:21 compute-0 python3.9[225300]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:21 compute-0 sudo[225298]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:22 compute-0 sudo[225450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjgidgraeyidobsuiamzpkcrijayurax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065261.836087-514-131021613635716/AnsiballZ_file.py'
Jan 10 17:14:22 compute-0 sudo[225450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:22 compute-0 python3.9[225452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:22 compute-0 sudo[225450]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:23 compute-0 sudo[225602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmtaesjtvxhshtflennysraotlcthjdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065262.6385887-571-47684358190944/AnsiballZ_file.py'
Jan 10 17:14:23 compute-0 sudo[225602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:23 compute-0 python3.9[225604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:23 compute-0 sudo[225602]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:23 compute-0 ceph-mon[75249]: pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:23 compute-0 sudo[225765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekpolxfgamdlnqdljknmlnsoftnqkdxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065263.4405901-571-45070950812922/AnsiballZ_file.py'
Jan 10 17:14:23 compute-0 sudo[225765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:23 compute-0 podman[225728]: 2026-01-10 17:14:23.845298654 +0000 UTC m=+0.081317019 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 17:14:24 compute-0 python3.9[225777]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:24 compute-0 sudo[225765]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:24 compute-0 sudo[225927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyukmvxrvwmpdwucoejzjzmmwixpxiez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065264.212925-571-126553354373086/AnsiballZ_file.py'
Jan 10 17:14:24 compute-0 sudo[225927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:24 compute-0 python3.9[225929]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:24 compute-0 sudo[225927]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:25 compute-0 sudo[226079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgoqhlcmdxyfgwgqaflmvzhfytdyamn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065264.9704356-571-281139731638452/AnsiballZ_file.py'
Jan 10 17:14:25 compute-0 sudo[226079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:25 compute-0 python3.9[226081]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:25 compute-0 sudo[226079]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:25 compute-0 ceph-mon[75249]: pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:25 compute-0 sudo[226231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmimsnnofzsskhhcrrjuhwvkildliteh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065265.6743622-571-90585360825624/AnsiballZ_file.py'
Jan 10 17:14:25 compute-0 sudo[226231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:26 compute-0 python3.9[226233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:26 compute-0 sudo[226231]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:26 compute-0 sudo[226383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tloabzsvpahzlbwdxhnsqozdqewoyxpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065266.392144-571-133635071009771/AnsiballZ_file.py'
Jan 10 17:14:26 compute-0 sudo[226383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:27 compute-0 python3.9[226385]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:27 compute-0 sudo[226383]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:27 compute-0 sudo[226535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htslbptlyeymeunqgggllvbgezlmkzot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065267.23591-571-92392242716294/AnsiballZ_file.py'
Jan 10 17:14:27 compute-0 sudo[226535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:27 compute-0 ceph-mon[75249]: pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:27 compute-0 python3.9[226537]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:27 compute-0 sudo[226535]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:28 compute-0 sudo[226687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juhdcinsnsgofnxghujsowqvhefcjyyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065267.9319036-571-31972614452012/AnsiballZ_file.py'
Jan 10 17:14:28 compute-0 sudo[226687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:28 compute-0 python3.9[226689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:14:28 compute-0 sudo[226687]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:29 compute-0 sudo[226839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhicmphujfjtwpdqkxebbsgxjvtlvfyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065268.7794464-629-25491944221864/AnsiballZ_command.py'
Jan 10 17:14:29 compute-0 sudo[226839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:29 compute-0 python3.9[226841]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:29 compute-0 sudo[226839]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:29 compute-0 ceph-mon[75249]: pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:30 compute-0 python3.9[226993]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 17:14:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:30 compute-0 sudo[227143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fikyqvvivklofxyuirgcbjegvkxmaqff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065270.473119-647-37982947823488/AnsiballZ_systemd_service.py'
Jan 10 17:14:30 compute-0 sudo[227143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:31 compute-0 python3.9[227145]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:14:31 compute-0 systemd[1]: Reloading.
Jan 10 17:14:31 compute-0 systemd-rc-local-generator[227177]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:14:31 compute-0 systemd-sysv-generator[227180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:14:31 compute-0 ceph-mon[75249]: pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:31 compute-0 sudo[227143]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:32 compute-0 sudo[227331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlefzijeyjndacskstofprtbgebturm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065272.036612-655-226433871404081/AnsiballZ_command.py'
Jan 10 17:14:32 compute-0 sudo[227331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:32 compute-0 python3.9[227333]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:33 compute-0 sudo[227331]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:33 compute-0 ceph-mon[75249]: pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:34 compute-0 sudo[227484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeyvlpumynjidhpxbmsevxeyjmapanao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065273.7524035-655-17486285133017/AnsiballZ_command.py'
Jan 10 17:14:34 compute-0 sudo[227484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:34 compute-0 python3.9[227486]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:34 compute-0 sudo[227484]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:34 compute-0 sudo[227637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcloipkeppjrhqrcruztpvibvdkzzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065274.588865-655-265724485254821/AnsiballZ_command.py'
Jan 10 17:14:34 compute-0 sudo[227637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:35 compute-0 python3.9[227639]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:35 compute-0 sudo[227637]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:35 compute-0 ceph-mon[75249]: pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:35 compute-0 sudo[227790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkffgexyeeeczbladxmrkfkmartycper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065275.3624392-655-25935046497538/AnsiballZ_command.py'
Jan 10 17:14:35 compute-0 sudo[227790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:35 compute-0 python3.9[227792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:36 compute-0 sudo[227790]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:36 compute-0 sudo[227943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkswmiuwinugresijieyobrcswfucbvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065276.1617627-655-181306132942307/AnsiballZ_command.py'
Jan 10 17:14:36 compute-0 sudo[227943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:36 compute-0 python3.9[227945]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:36 compute-0 sudo[227943]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:37 compute-0 sudo[228096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkqwtcpjhhldfetekedxclygyvjwpxgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065277.0402048-655-213045616330075/AnsiballZ_command.py'
Jan 10 17:14:37 compute-0 sudo[228096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:37 compute-0 python3.9[228098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:37 compute-0 sudo[228096]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:37 compute-0 ceph-mon[75249]: pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:14:38
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'vms']
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:14:38 compute-0 sudo[228249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmivpsgylejkjensoiyspcwoazvrcmrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065277.8057349-655-258707971081076/AnsiballZ_command.py'
Jan 10 17:14:38 compute-0 sudo[228249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:38 compute-0 python3.9[228251]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:38 compute-0 sudo[228249]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:38 compute-0 sudo[228402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqlxmhpixlkssjkxanjctzffvqolhvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065278.596013-655-178688720795221/AnsiballZ_command.py'
Jan 10 17:14:38 compute-0 sudo[228402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:14:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:14:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:14:39 compute-0 python3.9[228404]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 17:14:39 compute-0 sudo[228402]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:39 compute-0 ceph-mon[75249]: pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:40 compute-0 sudo[228555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgahsntdlbeowetjihujzipsbiqlklnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065280.0234993-734-183621980312403/AnsiballZ_file.py'
Jan 10 17:14:40 compute-0 sudo[228555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:40 compute-0 python3.9[228557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:40 compute-0 sudo[228555]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:41 compute-0 sudo[228707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmmjxiixfbigiwpyapgjytrngrqvuyga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065280.7903152-734-72120897294831/AnsiballZ_file.py'
Jan 10 17:14:41 compute-0 sudo[228707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:41 compute-0 python3.9[228709]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:41 compute-0 sudo[228707]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:41 compute-0 ceph-mon[75249]: pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:41 compute-0 sudo[228859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnthcqvsxsajzuumrzvcsdcqskeobbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065281.572285-734-188858992714070/AnsiballZ_file.py'
Jan 10 17:14:41 compute-0 sudo[228859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:42 compute-0 python3.9[228861]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:42 compute-0 sudo[228859]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:42 compute-0 sudo[229011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxomrsmmvcqfswraxglpqmzgfqiyguyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065282.4000068-756-30577210728722/AnsiballZ_file.py'
Jan 10 17:14:42 compute-0 sudo[229011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:42 compute-0 python3.9[229013]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:42 compute-0 sudo[229011]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:43 compute-0 sudo[229163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdqsolntesdpvnhkzizhspqtlativwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065283.0810552-756-189027503674545/AnsiballZ_file.py'
Jan 10 17:14:43 compute-0 sudo[229163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:43 compute-0 python3.9[229165]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:43 compute-0 sudo[229163]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:43 compute-0 ceph-mon[75249]: pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:44 compute-0 sudo[229315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjuxhhggzcnctxxoknoumcpmulekypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065283.8811052-756-166945720961621/AnsiballZ_file.py'
Jan 10 17:14:44 compute-0 sudo[229315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:44 compute-0 python3.9[229317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:14:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:14:44 compute-0 sudo[229315]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:44 compute-0 sudo[229467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiodmrsgfnuahoknitomqdbkptpetfhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065284.6395676-756-47652882709121/AnsiballZ_file.py'
Jan 10 17:14:44 compute-0 sudo[229467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:45 compute-0 python3.9[229469]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:45 compute-0 sudo[229467]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:45 compute-0 ceph-mon[75249]: pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:45 compute-0 sudo[229619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quuggsciaycvcmsxjjmbgmkhvzszcgga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065285.3803236-756-67582754651797/AnsiballZ_file.py'
Jan 10 17:14:45 compute-0 sudo[229619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:45 compute-0 python3.9[229621]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:46 compute-0 sudo[229619]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:46 compute-0 sudo[229771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eehiyfwhhbrbdsepovgxhluhgjqwsxyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065286.1742425-756-98061086148668/AnsiballZ_file.py'
Jan 10 17:14:46 compute-0 sudo[229771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:46 compute-0 python3.9[229773]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:46 compute-0 sudo[229771]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:47 compute-0 sudo[229923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxytvfdnlatzgohfpkjkuulqlppgyxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065286.8831317-756-158330678199116/AnsiballZ_file.py'
Jan 10 17:14:47 compute-0 sudo[229923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:47 compute-0 python3.9[229925]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:47 compute-0 sudo[229923]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:47 compute-0 ceph-mon[75249]: pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:14:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:14:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:14:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.213822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289214013, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 993, "num_deletes": 250, "total_data_size": 998251, "memory_usage": 1016448, "flush_reason": "Manual Compaction"}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289223221, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 605797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11160, "largest_seqno": 12152, "table_properties": {"data_size": 601985, "index_size": 1528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9676, "raw_average_key_size": 19, "raw_value_size": 593796, "raw_average_value_size": 1224, "num_data_blocks": 70, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065188, "oldest_key_time": 1768065188, "file_creation_time": 1768065289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9472 microseconds, and 5860 cpu microseconds.
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.223316) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 605797 bytes OK
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.223342) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225397) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225420) EVENT_LOG_v1 {"time_micros": 1768065289225413, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225467) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 993570, prev total WAL file size 993570, number of live WAL files 2.
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.226367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(591KB)], [29(5945KB)]
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289226496, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6694291, "oldest_snapshot_seqno": -1}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3262 keys, 4966885 bytes, temperature: kUnknown
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289282198, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4966885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4943798, "index_size": 13826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75533, "raw_average_key_size": 23, "raw_value_size": 4884055, "raw_average_value_size": 1497, "num_data_blocks": 614, "num_entries": 3262, "num_filter_entries": 3262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.282492) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4966885 bytes
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.284123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 89.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.8 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(19.2) write-amplify(8.2) OK, records in: 3729, records dropped: 467 output_compression: NoCompression
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.284145) EVENT_LOG_v1 {"time_micros": 1768065289284134, "job": 12, "event": "compaction_finished", "compaction_time_micros": 55806, "compaction_time_cpu_micros": 31389, "output_level": 6, "num_output_files": 1, "total_output_size": 4966885, "num_input_records": 3729, "num_output_records": 3262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289284427, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289285820, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.226262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:14:49 compute-0 ceph-mon[75249]: pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:50 compute-0 podman[229950]: 2026-01-10 17:14:50.199287807 +0000 UTC m=+0.189907754 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:14:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:51 compute-0 ceph-mon[75249]: pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:53 compute-0 sudo[230104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjagcwtfgeanmurwmegcfmmklytehou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065292.5691652-945-261586177823134/AnsiballZ_getent.py'
Jan 10 17:14:53 compute-0 sudo[230104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:53 compute-0 python3.9[230106]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 10 17:14:53 compute-0 sudo[230104]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:53 compute-0 ceph-mon[75249]: pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:54 compute-0 podman[230207]: 2026-01-10 17:14:54.131189568 +0000 UTC m=+0.129120213 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 10 17:14:54 compute-0 sudo[230277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkknytlcfbfdoacadhgoexmxkyuexfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065293.6282978-953-213182223317883/AnsiballZ_group.py'
Jan 10 17:14:54 compute-0 sudo[230277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:54 compute-0 python3.9[230279]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 17:14:54 compute-0 groupadd[230280]: group added to /etc/group: name=nova, GID=42436
Jan 10 17:14:54 compute-0 sshd-session[230037]: Connection closed by authenticating user root 216.36.124.133 port 49942 [preauth]
Jan 10 17:14:54 compute-0 groupadd[230280]: group added to /etc/gshadow: name=nova
Jan 10 17:14:54 compute-0 groupadd[230280]: new group: name=nova, GID=42436
Jan 10 17:14:54 compute-0 sudo[230277]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:55 compute-0 sudo[230435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqznnqwiyzqfyubgfbktppbtucostfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065294.7062416-961-121474900273864/AnsiballZ_user.py'
Jan 10 17:14:55 compute-0 sudo[230435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:14:55 compute-0 python3.9[230437]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 17:14:55 compute-0 useradd[230439]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 10 17:14:55 compute-0 useradd[230439]: add 'nova' to group 'libvirt'
Jan 10 17:14:55 compute-0 useradd[230439]: add 'nova' to shadow group 'libvirt'
Jan 10 17:14:55 compute-0 sudo[230435]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:55 compute-0 ceph-mon[75249]: pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:56 compute-0 sshd-session[230470]: Accepted publickey for zuul from 192.168.122.30 port 58822 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:14:56 compute-0 systemd-logind[798]: New session 51 of user zuul.
Jan 10 17:14:56 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 10 17:14:56 compute-0 sshd-session[230470]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:14:56 compute-0 sshd-session[230473]: Received disconnect from 192.168.122.30 port 58822:11: disconnected by user
Jan 10 17:14:56 compute-0 sshd-session[230473]: Disconnected from user zuul 192.168.122.30 port 58822
Jan 10 17:14:56 compute-0 sshd-session[230470]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:14:56 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 10 17:14:56 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Jan 10 17:14:56 compute-0 systemd-logind[798]: Removed session 51.
Jan 10 17:14:56 compute-0 sudo[230498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:14:56 compute-0 sudo[230498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:14:56 compute-0 sudo[230498]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:57 compute-0 sudo[230523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:14:57 compute-0 sudo[230523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:14:57 compute-0 sudo[230523]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:57 compute-0 python3.9[230690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:14:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:14:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:14:57 compute-0 sudo[230709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:14:57 compute-0 sudo[230709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:14:57 compute-0 sudo[230709]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:57 compute-0 sudo[230760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:14:57 compute-0 sudo[230760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.382334008 +0000 UTC m=+0.064276285 container create cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 17:14:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:58 compute-0 python3.9[230875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065297.1307986-986-204881653828104/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:58 compute-0 systemd[1]: Started libpod-conmon-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope.
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.355486546 +0000 UTC m=+0.037428823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:14:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.485490902 +0000 UTC m=+0.167433239 container init cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.497535991 +0000 UTC m=+0.179478278 container start cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.502280306 +0000 UTC m=+0.184222633 container attach cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:14:58 compute-0 competent_pare[230905]: 167 167
Jan 10 17:14:58 compute-0 systemd[1]: libpod-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope: Deactivated successfully.
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.50693709 +0000 UTC m=+0.188879377 container died cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eeedf1fd6a422b82f33973e9e697a6c0ced01dbb2581b6099ea2ce8561d8577-merged.mount: Deactivated successfully.
Jan 10 17:14:58 compute-0 podman[230888]: 2026-01-10 17:14:58.558785804 +0000 UTC m=+0.240728091 container remove cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:14:58 compute-0 systemd[1]: libpod-conmon-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope: Deactivated successfully.
Jan 10 17:14:58 compute-0 podman[230999]: 2026-01-10 17:14:58.791970613 +0000 UTC m=+0.067565491 container create 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:14:58 compute-0 systemd[1]: Started libpod-conmon-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope.
Jan 10 17:14:58 compute-0 podman[230999]: 2026-01-10 17:14:58.76732517 +0000 UTC m=+0.042920088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:14:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:14:58 compute-0 podman[230999]: 2026-01-10 17:14:58.90956014 +0000 UTC m=+0.185155078 container init 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:14:58 compute-0 podman[230999]: 2026-01-10 17:14:58.924112875 +0000 UTC m=+0.199707743 container start 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:14:58 compute-0 podman[230999]: 2026-01-10 17:14:58.928880012 +0000 UTC m=+0.204474880 container attach 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 10 17:14:59 compute-0 python3.9[231100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:14:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:14:59 compute-0 eloquent_jennings[231045]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:14:59 compute-0 eloquent_jennings[231045]: --> All data devices are unavailable
Jan 10 17:14:59 compute-0 systemd[1]: libpod-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope: Deactivated successfully.
Jan 10 17:14:59 compute-0 podman[230999]: 2026-01-10 17:14:59.5808504 +0000 UTC m=+0.856445318 container died 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510-merged.mount: Deactivated successfully.
Jan 10 17:14:59 compute-0 podman[230999]: 2026-01-10 17:14:59.63821029 +0000 UTC m=+0.913805168 container remove 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 17:14:59 compute-0 systemd[1]: libpod-conmon-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope: Deactivated successfully.
Jan 10 17:14:59 compute-0 sudo[230760]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:59 compute-0 python3.9[231191]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:14:59 compute-0 sudo[231205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:14:59 compute-0 sudo[231205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:14:59 compute-0 sudo[231205]: pam_unix(sudo:session): session closed for user root
Jan 10 17:14:59 compute-0 ceph-mon[75249]: pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:14:59 compute-0 sudo[231247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:14:59 compute-0 sudo[231247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.255326644 +0000 UTC m=+0.063592566 container create 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:15:00 compute-0 systemd[1]: Started libpod-conmon-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope.
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.219804323 +0000 UTC m=+0.028070295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:15:00 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.354861252 +0000 UTC m=+0.163127224 container init 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.36572255 +0000 UTC m=+0.173988442 container start 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.369046938 +0000 UTC m=+0.177312930 container attach 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:15:00 compute-0 blissful_kalam[231426]: 167 167
Jan 10 17:15:00 compute-0 systemd[1]: libpod-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope: Deactivated successfully.
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.373972568 +0000 UTC m=+0.182238490 container died 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:15:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b21ee734c4ca872e3292b141ce87fd3eb015116b8ad7f4ac04da673ab25ae083-merged.mount: Deactivated successfully.
Jan 10 17:15:00 compute-0 podman[231367]: 2026-01-10 17:15:00.429558901 +0000 UTC m=+0.237824833 container remove 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:15:00 compute-0 systemd[1]: libpod-conmon-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope: Deactivated successfully.
Jan 10 17:15:00 compute-0 python3.9[231435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:00 compute-0 podman[231457]: 2026-01-10 17:15:00.644787254 +0000 UTC m=+0.044570822 container create 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:15:00 compute-0 systemd[1]: Started libpod-conmon-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope.
Jan 10 17:15:00 compute-0 podman[231457]: 2026-01-10 17:15:00.624128327 +0000 UTC m=+0.023911945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:15:00 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:00 compute-0 podman[231457]: 2026-01-10 17:15:00.771521733 +0000 UTC m=+0.171305321 container init 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:15:00 compute-0 podman[231457]: 2026-01-10 17:15:00.782110583 +0000 UTC m=+0.181894141 container start 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:15:00 compute-0 podman[231457]: 2026-01-10 17:15:00.785926405 +0000 UTC m=+0.185710033 container attach 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]: {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     "0": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "devices": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "/dev/loop3"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             ],
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_name": "ceph_lv0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_size": "21470642176",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "name": "ceph_lv0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "tags": {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_name": "ceph",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.crush_device_class": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.encrypted": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.objectstore": "bluestore",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_id": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.vdo": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.with_tpm": "0"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             },
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "vg_name": "ceph_vg0"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         }
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     ],
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     "1": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "devices": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "/dev/loop4"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             ],
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_name": "ceph_lv1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_size": "21470642176",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "name": "ceph_lv1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "tags": {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_name": "ceph",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.crush_device_class": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.encrypted": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.objectstore": "bluestore",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_id": "1",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.vdo": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.with_tpm": "0"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             },
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "vg_name": "ceph_vg1"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         }
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     ],
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     "2": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "devices": [
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "/dev/loop5"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             ],
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_name": "ceph_lv2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_size": "21470642176",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "name": "ceph_lv2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "tags": {
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.cluster_name": "ceph",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.crush_device_class": "",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.encrypted": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.objectstore": "bluestore",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osd_id": "2",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.vdo": "0",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:                 "ceph.with_tpm": "0"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             },
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "type": "block",
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:             "vg_name": "ceph_vg2"
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:         }
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]:     ]
Jan 10 17:15:01 compute-0 gifted_cartwright[231510]: }
Jan 10 17:15:01 compute-0 systemd[1]: libpod-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope: Deactivated successfully.
Jan 10 17:15:01 compute-0 podman[231604]: 2026-01-10 17:15:01.15780412 +0000 UTC m=+0.025476176 container died 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:15:01 compute-0 python3.9[231600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065299.9343207-986-270210907857195/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0-merged.mount: Deactivated successfully.
Jan 10 17:15:01 compute-0 podman[231604]: 2026-01-10 17:15:01.191261146 +0000 UTC m=+0.058933192 container remove 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:15:01 compute-0 systemd[1]: libpod-conmon-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope: Deactivated successfully.
Jan 10 17:15:01 compute-0 sudo[231247]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:01 compute-0 sudo[231643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:15:01 compute-0 sudo[231643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:15:01 compute-0 sudo[231643]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:01 compute-0 sudo[231691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:15:01 compute-0 sudo[231691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.646004848 +0000 UTC m=+0.047633944 container create 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:15:01 compute-0 systemd[1]: Started libpod-conmon-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope.
Jan 10 17:15:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.624916719 +0000 UTC m=+0.026545775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.745104244 +0000 UTC m=+0.146733360 container init 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.75252442 +0000 UTC m=+0.154153526 container start 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.756317151 +0000 UTC m=+0.157946277 container attach 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:15:01 compute-0 loving_archimedes[231847]: 167 167
Jan 10 17:15:01 compute-0 systemd[1]: libpod-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope: Deactivated successfully.
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.761761035 +0000 UTC m=+0.163390091 container died 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f4be5cf6c2104ccc666cabd35ecec5fef40f687bba64505a8380c80f0185608-merged.mount: Deactivated successfully.
Jan 10 17:15:01 compute-0 python3.9[231832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:01 compute-0 podman[231830]: 2026-01-10 17:15:01.805198136 +0000 UTC m=+0.206827192 container remove 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:15:01 compute-0 systemd[1]: libpod-conmon-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope: Deactivated successfully.
Jan 10 17:15:01 compute-0 ceph-mon[75249]: pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:01 compute-0 podman[231915]: 2026-01-10 17:15:01.983568573 +0000 UTC m=+0.049033800 container create 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:15:02 compute-0 systemd[1]: Started libpod-conmon-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope.
Jan 10 17:15:02 compute-0 podman[231915]: 2026-01-10 17:15:01.963099171 +0000 UTC m=+0.028564418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:15:02 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:02 compute-0 podman[231915]: 2026-01-10 17:15:02.114940405 +0000 UTC m=+0.180405682 container init 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:15:02 compute-0 podman[231915]: 2026-01-10 17:15:02.127634061 +0000 UTC m=+0.193099298 container start 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:15:02 compute-0 podman[231915]: 2026-01-10 17:15:02.132210083 +0000 UTC m=+0.197675330 container attach 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:15:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:02 compute-0 python3.9[232014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065301.3100128-986-55411747427617/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:02 compute-0 lvm[232237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:15:02 compute-0 lvm[232237]: VG ceph_vg1 finished
Jan 10 17:15:02 compute-0 lvm[232233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:15:02 compute-0 lvm[232240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:15:02 compute-0 lvm[232240]: VG ceph_vg2 finished
Jan 10 17:15:02 compute-0 lvm[232233]: VG ceph_vg0 finished
Jan 10 17:15:02 compute-0 clever_montalcini[231960]: {}
Jan 10 17:15:02 compute-0 systemd[1]: libpod-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Deactivated successfully.
Jan 10 17:15:02 compute-0 systemd[1]: libpod-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Consumed 1.471s CPU time.
Jan 10 17:15:03 compute-0 podman[231915]: 2026-01-10 17:15:03.000892954 +0000 UTC m=+1.066358181 container died 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8-merged.mount: Deactivated successfully.
Jan 10 17:15:03 compute-0 podman[231915]: 2026-01-10 17:15:03.057549095 +0000 UTC m=+1.123014302 container remove 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 17:15:03 compute-0 systemd[1]: libpod-conmon-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Deactivated successfully.
Jan 10 17:15:03 compute-0 sudo[231691]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:15:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:15:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:15:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:15:03 compute-0 python3.9[232241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:03 compute-0 sudo[232255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:15:03 compute-0 sudo[232255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:15:03 compute-0 sudo[232255]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:03 compute-0 python3.9[232400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065302.5831914-986-40725975583737/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:03 compute-0 ceph-mon[75249]: pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:15:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:15:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:04 compute-0 python3.9[232550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:05 compute-0 python3.9[232671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065303.8301475-986-40380494422430/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:05 compute-0 sudo[232821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpzgvqkybfldgmkvymgtdhvqhbeotsek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065305.4026244-1069-140172651517377/AnsiballZ_file.py'
Jan 10 17:15:05 compute-0 sudo[232821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:05 compute-0 ceph-mon[75249]: pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:05 compute-0 python3.9[232823]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:15:05 compute-0 sudo[232821]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:06 compute-0 sudo[232973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmouyehbtnjanglrrxuchkhtwnvgviex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065306.1915615-1077-182286245385715/AnsiballZ_copy.py'
Jan 10 17:15:06 compute-0 sudo[232973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:06 compute-0 python3.9[232975]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:15:06 compute-0 sudo[232973]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:07 compute-0 sudo[233125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlqranwfzrkigxnruiysidqaguxoibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065307.0869532-1085-235788768435066/AnsiballZ_stat.py'
Jan 10 17:15:07 compute-0 sudo[233125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:07 compute-0 python3.9[233127]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:07 compute-0 sudo[233125]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:07 compute-0 ceph-mon[75249]: pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:08 compute-0 sudo[233277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiaglcldpbrgcskumcanelnlwdlwnqkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065307.9722266-1093-106154326139438/AnsiballZ_stat.py'
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:08 compute-0 sudo[233277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:08 compute-0 python3.9[233279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:08 compute-0 sudo[233277]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:09 compute-0 sudo[233400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiweczlsjdecvjdsyufhzcoqygxvati ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065307.9722266-1093-106154326139438/AnsiballZ_copy.py'
Jan 10 17:15:09 compute-0 sudo[233400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:09 compute-0 python3.9[233402]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768065307.9722266-1093-106154326139438/.source _original_basename=.jixremqj follow=False checksum=b4fbc2aa16e07d05be4a21268488f740d2389779 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 10 17:15:09 compute-0 sudo[233400]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:09 compute-0 ceph-mon[75249]: pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:10 compute-0 python3.9[233554]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:10 compute-0 python3.9[233706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:11 compute-0 python3.9[233827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065310.3299131-1119-207001563854561/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:11 compute-0 ceph-mon[75249]: pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:12 compute-0 python3.9[233977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 17:15:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:12 compute-0 python3.9[234098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065311.7035794-1134-245197114537769/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 17:15:13 compute-0 ceph-mon[75249]: pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:13 compute-0 sudo[234248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmiqwafbssonrvyxdiwwwhokqclyymtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065313.3575976-1151-117550477283358/AnsiballZ_container_config_data.py'
Jan 10 17:15:13 compute-0 sudo[234248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:14 compute-0 python3.9[234250]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 10 17:15:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:14 compute-0 sudo[234248]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:15 compute-0 sudo[234400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbucenepmbiwqguufuzqavitfgmniteu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065314.6082861-1162-175165080500814/AnsiballZ_container_config_hash.py'
Jan 10 17:15:15 compute-0 sudo[234400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:15 compute-0 python3.9[234402]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 17:15:15 compute-0 sudo[234400]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:15 compute-0 ceph-mon[75249]: pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:16 compute-0 sudo[234552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwuicqxudxuycvwteyltzcxchrvxgomf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768065315.9601865-1172-134316962680944/AnsiballZ_edpm_container_manage.py'
Jan 10 17:15:16 compute-0 sudo[234552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:16 compute-0 python3[234554]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 17:15:16 compute-0 ceph-mon[75249]: pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:19 compute-0 ceph-mon[75249]: pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:21 compute-0 ceph-mon[75249]: pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:22 compute-0 podman[234610]: 2026-01-10 17:15:22.072901747 +0000 UTC m=+1.071834055 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Jan 10 17:15:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:23 compute-0 ceph-mon[75249]: pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:26 compute-0 ceph-mon[75249]: pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:26 compute-0 podman[234653]: 2026-01-10 17:15:26.456359562 +0000 UTC m=+1.442544688 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 10 17:15:26 compute-0 podman[234569]: 2026-01-10 17:15:26.510945459 +0000 UTC m=+9.699653459 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 17:15:26 compute-0 podman[234697]: 2026-01-10 17:15:26.760360259 +0000 UTC m=+0.083433222 container create 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 10 17:15:26 compute-0 podman[234697]: 2026-01-10 17:15:26.725475964 +0000 UTC m=+0.048548957 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 17:15:26 compute-0 python3[234554]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 10 17:15:26 compute-0 sudo[234552]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:27 compute-0 ceph-mon[75249]: pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:27 compute-0 sudo[234885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkqdhxecgjmsrpriuwqhycxvjapdybsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065327.2422137-1180-5550869547081/AnsiballZ_stat.py'
Jan 10 17:15:27 compute-0 sudo[234885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:27 compute-0 python3.9[234887]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:27 compute-0 sudo[234885]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:28 compute-0 sudo[235039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxwodzrvjsyrahqsbesjuagbktotniuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065328.3731022-1192-149859224254109/AnsiballZ_container_config_data.py'
Jan 10 17:15:28 compute-0 sudo[235039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:29 compute-0 python3.9[235041]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 10 17:15:29 compute-0 sudo[235039]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:29 compute-0 ceph-mon[75249]: pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:29 compute-0 sudo[235191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uknubptzziqpebhjlpmgfbiuwscomcfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065329.5156708-1203-242763613127537/AnsiballZ_container_config_hash.py'
Jan 10 17:15:29 compute-0 sudo[235191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:30 compute-0 python3.9[235193]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 17:15:30 compute-0 sudo[235191]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:31 compute-0 sudo[235343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxtqyozivbasiscthdyjfsyhfwhjlofx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768065330.6269393-1213-44300312167540/AnsiballZ_edpm_container_manage.py'
Jan 10 17:15:31 compute-0 sudo[235343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:31 compute-0 ceph-mon[75249]: pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:31 compute-0 python3[235345]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 17:15:31 compute-0 podman[235384]: 2026-01-10 17:15:31.782417808 +0000 UTC m=+0.075844451 container create 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251202, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:15:31 compute-0 podman[235384]: 2026-01-10 17:15:31.746770154 +0000 UTC m=+0.040196847 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 17:15:31 compute-0 python3[235345]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 10 17:15:31 compute-0 sudo[235343]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:32 compute-0 sudo[235572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcofolhsdlvnyvuhpyohhrahfvpioopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065332.2390141-1221-103444474248199/AnsiballZ_stat.py'
Jan 10 17:15:32 compute-0 sudo[235572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:32 compute-0 python3.9[235574]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:32 compute-0 sudo[235572]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:33 compute-0 ceph-mon[75249]: pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:33 compute-0 sudo[235726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjyowctyufgjbtjmbmwhpfugchmnlspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065333.210089-1230-140095928635761/AnsiballZ_file.py'
Jan 10 17:15:33 compute-0 sudo[235726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:33 compute-0 python3.9[235728]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:15:33 compute-0 sudo[235726]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:34 compute-0 sudo[235877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptznqgokqhwrsebizdlsfctofmofjgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065333.9501176-1230-255364035124942/AnsiballZ_copy.py'
Jan 10 17:15:34 compute-0 sudo[235877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:34 compute-0 python3.9[235879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768065333.9501176-1230-255364035124942/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 17:15:34 compute-0 sudo[235877]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:35 compute-0 sudo[235953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzxmfljhjyztdnyuczytbthcexgeqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065333.9501176-1230-255364035124942/AnsiballZ_systemd.py'
Jan 10 17:15:35 compute-0 sudo[235953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:35 compute-0 python3.9[235955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 17:15:35 compute-0 systemd[1]: Reloading.
Jan 10 17:15:35 compute-0 ceph-mon[75249]: pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:35 compute-0 systemd-sysv-generator[235985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:15:35 compute-0 systemd-rc-local-generator[235980]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:15:35 compute-0 sudo[235953]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:36 compute-0 sudo[236064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xubxasirrvdeaxasouxzivpkhpmvmqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065333.9501176-1230-255364035124942/AnsiballZ_systemd.py'
Jan 10 17:15:36 compute-0 sudo[236064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:36 compute-0 python3.9[236066]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 17:15:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:37 compute-0 systemd[1]: Reloading.
Jan 10 17:15:37 compute-0 ceph-mon[75249]: pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:37 compute-0 systemd-sysv-generator[236094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 17:15:37 compute-0 systemd-rc-local-generator[236090]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 17:15:37 compute-0 systemd[1]: Starting nova_compute container...
Jan 10 17:15:37 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:37 compute-0 podman[236106]: 2026-01-10 17:15:37.924506789 +0000 UTC m=+0.141826590 container init 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 10 17:15:37 compute-0 podman[236106]: 2026-01-10 17:15:37.974398391 +0000 UTC m=+0.191718062 container start 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Jan 10 17:15:37 compute-0 podman[236106]: nova_compute
Jan 10 17:15:37 compute-0 nova_compute[236122]: + sudo -E kolla_set_configs
Jan 10 17:15:37 compute-0 systemd[1]: Started nova_compute container.
Jan 10 17:15:38 compute-0 sudo[236064]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:15:38
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Validating config file
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying service configuration files
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Deleting /etc/ceph
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Creating directory /etc/ceph
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Writing out command to execute
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:38 compute-0 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 17:15:38 compute-0 nova_compute[236122]: ++ cat /run_command
Jan 10 17:15:38 compute-0 nova_compute[236122]: + CMD=nova-compute
Jan 10 17:15:38 compute-0 nova_compute[236122]: + ARGS=
Jan 10 17:15:38 compute-0 nova_compute[236122]: + sudo kolla_copy_cacerts
Jan 10 17:15:38 compute-0 nova_compute[236122]: + [[ ! -n '' ]]
Jan 10 17:15:38 compute-0 nova_compute[236122]: + . kolla_extend_start
Jan 10 17:15:38 compute-0 nova_compute[236122]: Running command: 'nova-compute'
Jan 10 17:15:38 compute-0 nova_compute[236122]: + echo 'Running command: '\''nova-compute'\'''
Jan 10 17:15:38 compute-0 nova_compute[236122]: + umask 0022
Jan 10 17:15:38 compute-0 nova_compute[236122]: + exec nova-compute
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:15:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:15:39 compute-0 python3.9[236283]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:15:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:15:39 compute-0 ceph-mon[75249]: pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:39 compute-0 python3.9[236434]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.612 236126 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.749 236126 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.780 236126 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:15:40 compute-0 nova_compute[236122]: 2026-01-10 17:15:40.781 236126 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 10 17:15:40 compute-0 python3.9[236586]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.465 236126 INFO nova.virt.driver [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 10 17:15:41 compute-0 ceph-mon[75249]: pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:41 compute-0 sudo[236738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wooxwyrbqznmslwpsmrkxfsdnlexkcbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065341.1629844-1290-10982082503276/AnsiballZ_podman_container.py'
Jan 10 17:15:41 compute-0 sudo[236738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.716 236126 INFO nova.compute.provider_config [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.731 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.731 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 WARNING oslo_config.cfg [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 10 17:15:41 compute-0 nova_compute[236122]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 10 17:15:41 compute-0 nova_compute[236122]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 10 17:15:41 compute-0 nova_compute[236122]: and ``live_migration_inbound_addr`` respectively.
Jan 10 17:15:41 compute-0 nova_compute[236122]: ).  Its value may be silently ignored in the future.
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_secret_uuid        = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.842 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.843 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.843 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.844 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.844 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.845 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.845 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.891 236126 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.913 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.914 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.914 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 10 17:15:41 compute-0 nova_compute[236122]: 2026-01-10 17:15:41.915 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 10 17:15:41 compute-0 python3.9[236740]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 10 17:15:41 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 10 17:15:41 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:15:41 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.006 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f65352c0ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.008 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f65352c0ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.010 236126 INFO nova.virt.libvirt.driver [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Connection event '1' reason 'None'
Jan 10 17:15:42 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.043 236126 WARNING nova.virt.libvirt.driver [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.043 236126 DEBUG nova.virt.libvirt.volume.mount [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 10 17:15:42 compute-0 sudo[236738]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:42 compute-0 sudo[236971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njgktavqxdfaxtmiambdzvjbkatkogya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065342.265781-1298-141156856774690/AnsiballZ_systemd.py'
Jan 10 17:15:42 compute-0 sudo[236971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.969 236126 INFO nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host capabilities <capabilities>
Jan 10 17:15:42 compute-0 nova_compute[236122]: 
Jan 10 17:15:42 compute-0 nova_compute[236122]:   <host>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <uuid>a9d7d544-72dd-4b08-9e5e-495057bde287</uuid>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <cpu>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <arch>x86_64</arch>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <model>EPYC-Rome-v4</model>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <vendor>AMD</vendor>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <microcode version='16777317'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <signature family='23' model='49' stepping='0'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='x2apic'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='tsc-deadline'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='osxsave'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='hypervisor'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='tsc_adjust'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='spec-ctrl'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='stibp'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='arch-capabilities'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='ssbd'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='cmp_legacy'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='topoext'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='virt-ssbd'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='lbrv'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='tsc-scale'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='vmcb-clean'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='pause-filter'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='pfthreshold'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='svme-addr-chk'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='rdctl-no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='skip-l1dfl-vmentry'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='mds-no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <feature name='pschange-mc-no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <pages unit='KiB' size='4'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <pages unit='KiB' size='2048'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <pages unit='KiB' size='1048576'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </cpu>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <power_management>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <suspend_mem/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </power_management>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <iommu support='no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <migration_features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <live/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <uri_transports>
Jan 10 17:15:42 compute-0 nova_compute[236122]:         <uri_transport>tcp</uri_transport>
Jan 10 17:15:42 compute-0 nova_compute[236122]:         <uri_transport>rdma</uri_transport>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       </uri_transports>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </migration_features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <topology>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <cells num='1'>
Jan 10 17:15:42 compute-0 nova_compute[236122]:         <cell id='0'>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <memory unit='KiB'>7864312</memory>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <pages unit='KiB' size='2048'>0</pages>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <distances>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <sibling id='0' value='10'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           </distances>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           <cpus num='8'>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:           </cpus>
Jan 10 17:15:42 compute-0 nova_compute[236122]:         </cell>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       </cells>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </topology>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <cache>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </cache>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <secmodel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <model>selinux</model>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <doi>0</doi>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </secmodel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <secmodel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <model>dac</model>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <doi>0</doi>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </secmodel>
Jan 10 17:15:42 compute-0 nova_compute[236122]:   </host>
Jan 10 17:15:42 compute-0 nova_compute[236122]: 
Jan 10 17:15:42 compute-0 nova_compute[236122]:   <guest>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <os_type>hvm</os_type>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <arch name='i686'>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <wordsize>32</wordsize>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <domain type='qemu'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <domain type='kvm'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </arch>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <pae/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <nonpae/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <acpi default='on' toggle='yes'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <apic default='on' toggle='no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <cpuselection/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <deviceboot/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <disksnapshot default='on' toggle='no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <externalSnapshot/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:   </guest>
Jan 10 17:15:42 compute-0 nova_compute[236122]: 
Jan 10 17:15:42 compute-0 nova_compute[236122]:   <guest>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <os_type>hvm</os_type>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <arch name='x86_64'>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <wordsize>64</wordsize>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <domain type='qemu'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <domain type='kvm'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </arch>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     <features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <acpi default='on' toggle='yes'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <apic default='on' toggle='no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <cpuselection/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <deviceboot/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <disksnapshot default='on' toggle='no'/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:       <externalSnapshot/>
Jan 10 17:15:42 compute-0 nova_compute[236122]:     </features>
Jan 10 17:15:42 compute-0 nova_compute[236122]:   </guest>
Jan 10 17:15:42 compute-0 nova_compute[236122]: 
Jan 10 17:15:42 compute-0 nova_compute[236122]: </capabilities>
Jan 10 17:15:42 compute-0 nova_compute[236122]: 
Jan 10 17:15:42 compute-0 python3.9[236973]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 17:15:42 compute-0 nova_compute[236122]: 2026-01-10 17:15:42.983 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.008 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 10 17:15:43 compute-0 nova_compute[236122]: <domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <domain>kvm</domain>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <arch>i686</arch>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <vcpu max='4096'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <iothreads supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <os supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='firmware'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <loader supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>rom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pflash</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='readonly'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>yes</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='secure'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </loader>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </os>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='maximumMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <vendor>AMD</vendor>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='succor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='custom' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-128'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-256'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-512'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:43 compute-0 systemd[1]: Stopping nova_compute container...
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <memoryBacking supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='sourceType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>anonymous</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>memfd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </memoryBacking>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <disk supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='diskDevice'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>disk</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cdrom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>floppy</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>lun</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>fdc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>sata</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </disk>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <graphics supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vnc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egl-headless</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </graphics>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <video supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='modelType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vga</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cirrus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>none</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>bochs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ramfb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </video>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hostdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='mode'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>subsystem</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='startupPolicy'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>mandatory</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>requisite</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>optional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='subsysType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pci</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='capsType'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='pciBackend'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hostdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <rng supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>random</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </rng>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <filesystem supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='driverType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>path</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>handle</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtiofs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </filesystem>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <tpm supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-tis</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-crb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emulator</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>external</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendVersion'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>2.0</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </tpm>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <redirdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </redirdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <channel supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </channel>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <crypto supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </crypto>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <interface supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>passt</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </interface>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <panic supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>isa</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>hyperv</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </panic>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <console supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>null</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dev</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pipe</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stdio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>udp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tcp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu-vdagent</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </console>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <features>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <gic supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <genid supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backup supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <async-teardown supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <ps2 supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sev supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sgx supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hyperv supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='features'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>relaxed</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vapic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>spinlocks</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vpindex</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>runtime</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>synic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stimer</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reset</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vendor_id</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>frequencies</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reenlightenment</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tlbflush</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ipi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>avic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emsr_bitmap</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>xmm_input</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hyperv>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <launchSecurity supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='sectype'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tdx</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </launchSecurity>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </features>
Jan 10 17:15:43 compute-0 nova_compute[236122]: </domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.014 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 10 17:15:43 compute-0 nova_compute[236122]: <domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <domain>kvm</domain>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <arch>i686</arch>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <vcpu max='240'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <iothreads supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <os supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='firmware'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <loader supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>rom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pflash</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='readonly'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>yes</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='secure'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </loader>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </os>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='maximumMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <vendor>AMD</vendor>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='succor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='custom' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-128'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-256'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-512'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <memoryBacking supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='sourceType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>anonymous</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>memfd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </memoryBacking>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <disk supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='diskDevice'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>disk</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cdrom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>floppy</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>lun</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ide</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>fdc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>sata</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </disk>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <graphics supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vnc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egl-headless</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </graphics>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <video supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='modelType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vga</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cirrus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>none</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>bochs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ramfb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </video>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hostdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='mode'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>subsystem</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='startupPolicy'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>mandatory</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>requisite</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>optional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='subsysType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pci</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='capsType'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='pciBackend'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hostdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <rng supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>random</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </rng>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <filesystem supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='driverType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>path</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>handle</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtiofs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </filesystem>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <tpm supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-tis</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-crb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emulator</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>external</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendVersion'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>2.0</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </tpm>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <redirdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </redirdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <channel supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </channel>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <crypto supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </crypto>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <interface supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>passt</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </interface>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <panic supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>isa</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>hyperv</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </panic>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <console supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>null</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dev</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pipe</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stdio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>udp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tcp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu-vdagent</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </console>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <features>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <gic supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <genid supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backup supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <async-teardown supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <ps2 supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sev supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sgx supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hyperv supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='features'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>relaxed</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vapic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>spinlocks</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vpindex</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>runtime</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>synic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stimer</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reset</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vendor_id</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>frequencies</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reenlightenment</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tlbflush</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ipi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>avic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emsr_bitmap</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>xmm_input</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hyperv>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <launchSecurity supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='sectype'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tdx</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </launchSecurity>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </features>
Jan 10 17:15:43 compute-0 nova_compute[236122]: </domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.054 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.058 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 10 17:15:43 compute-0 nova_compute[236122]: <domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <domain>kvm</domain>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <arch>x86_64</arch>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <vcpu max='4096'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <iothreads supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <os supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='firmware'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>efi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <loader supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>rom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pflash</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='readonly'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>yes</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='secure'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>yes</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>no</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </loader>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </os>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='maximumMigratable'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>on</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>off</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <vendor>AMD</vendor>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='succor'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <mode name='custom' supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Denverton-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='auto-ibrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amd-psfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='stibp-always-on'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='EPYC-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-128'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-256'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx10-512'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='prefetchiti'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Haswell-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512er'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512pf'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fma4'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tbm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xop'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='amx-tile'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-bf16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-fp16'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bitalg'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrc'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fzrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='la57'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='taa-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xfd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ifma'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cmpccxadd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fbsdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='fsrs'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ibrs-all'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mcdt-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pbrsb-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='psdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='serialize'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vaes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='hle'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='rtm'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512bw'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512cd'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512dq'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512f'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='avx512vl'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='invpcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pcid'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='pku'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='mpx'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='core-capability'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='split-lock-detect'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='cldemote'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='erms'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='gfni'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdir64b'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='movdiri'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='xsaves'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='athlon-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='core2duo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='coreduo-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='n270-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='ss'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <blockers model='phenom-v1'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnow'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <feature name='3dnowext'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </blockers>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </mode>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </cpu>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <memoryBacking supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <enum name='sourceType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>anonymous</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <value>memfd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </memoryBacking>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <disk supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='diskDevice'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>disk</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cdrom</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>floppy</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>lun</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>fdc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>sata</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </disk>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <graphics supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vnc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egl-headless</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </graphics>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <video supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='modelType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vga</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>cirrus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>none</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>bochs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ramfb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </video>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hostdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='mode'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>subsystem</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='startupPolicy'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>mandatory</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>requisite</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>optional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='subsysType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pci</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>scsi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='capsType'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='pciBackend'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hostdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <rng supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtio-non-transitional</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>random</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>egd</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </rng>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <filesystem supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='driverType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>path</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>handle</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>virtiofs</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </filesystem>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <tpm supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-tis</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tpm-crb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emulator</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>external</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendVersion'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>2.0</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </tpm>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <redirdev supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='bus'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>usb</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </redirdev>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <channel supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </channel>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <crypto supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendModel'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>builtin</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </crypto>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <interface supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='backendType'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>default</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>passt</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </interface>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <panic supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='model'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>isa</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>hyperv</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </panic>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <console supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='type'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>null</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vc</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pty</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dev</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>file</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>pipe</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stdio</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>udp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tcp</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>unix</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>qemu-vdagent</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>dbus</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </console>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </devices>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   <features>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <gic supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <genid supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <backup supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <async-teardown supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <ps2 supported='yes'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sev supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <sgx supported='no'/>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <hyperv supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='features'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>relaxed</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vapic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>spinlocks</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vpindex</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>runtime</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>synic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>stimer</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reset</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>vendor_id</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>frequencies</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>reenlightenment</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tlbflush</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>ipi</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>avic</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>emsr_bitmap</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>xmm_input</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </defaults>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </hyperv>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     <launchSecurity supported='yes'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       <enum name='sectype'>
Jan 10 17:15:43 compute-0 nova_compute[236122]:         <value>tdx</value>
Jan 10 17:15:43 compute-0 nova_compute[236122]:       </enum>
Jan 10 17:15:43 compute-0 nova_compute[236122]:     </launchSecurity>
Jan 10 17:15:43 compute-0 nova_compute[236122]:   </features>
Jan 10 17:15:43 compute-0 nova_compute[236122]: </domainCapabilities>
Jan 10 17:15:43 compute-0 nova_compute[236122]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.116 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.121 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:15:43 compute-0 nova_compute[236122]: 2026-01-10 17:15:43.122 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:15:43 compute-0 ceph-mon[75249]: pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:43 compute-0 virtqemud[236762]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 10 17:15:43 compute-0 virtqemud[236762]: hostname: compute-0
Jan 10 17:15:43 compute-0 virtqemud[236762]: End of file while reading data: Input/output error
Jan 10 17:15:43 compute-0 systemd[1]: libpod-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02.scope: Deactivated successfully.
Jan 10 17:15:43 compute-0 systemd[1]: libpod-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02.scope: Consumed 3.500s CPU time.
Jan 10 17:15:43 compute-0 podman[236981]: 2026-01-10 17:15:43.523821425 +0000 UTC m=+0.477849444 container died 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute)
Jan 10 17:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02-userdata-shm.mount: Deactivated successfully.
Jan 10 17:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7-merged.mount: Deactivated successfully.
Jan 10 17:15:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:15:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:15:45 compute-0 podman[236981]: 2026-01-10 17:15:45.606449265 +0000 UTC m=+2.560477314 container cleanup 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Jan 10 17:15:45 compute-0 podman[236981]: nova_compute
Jan 10 17:15:45 compute-0 ceph-mon[75249]: pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:45 compute-0 podman[237018]: nova_compute
Jan 10 17:15:45 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 10 17:15:45 compute-0 systemd[1]: Stopped nova_compute container.
Jan 10 17:15:45 compute-0 systemd[1]: Starting nova_compute container...
Jan 10 17:15:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:45 compute-0 podman[237032]: 2026-01-10 17:15:45.894579136 +0000 UTC m=+0.135187824 container init 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:15:45 compute-0 podman[237032]: 2026-01-10 17:15:45.907753163 +0000 UTC m=+0.148361811 container start 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 10 17:15:45 compute-0 podman[237032]: nova_compute
Jan 10 17:15:45 compute-0 nova_compute[237049]: + sudo -E kolla_set_configs
Jan 10 17:15:45 compute-0 systemd[1]: Started nova_compute container.
Jan 10 17:15:45 compute-0 sudo[236971]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Validating config file
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying service configuration files
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /etc/ceph
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Creating directory /etc/ceph
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Writing out command to execute
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:46 compute-0 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 17:15:46 compute-0 nova_compute[237049]: ++ cat /run_command
Jan 10 17:15:46 compute-0 nova_compute[237049]: + CMD=nova-compute
Jan 10 17:15:46 compute-0 nova_compute[237049]: + ARGS=
Jan 10 17:15:46 compute-0 nova_compute[237049]: + sudo kolla_copy_cacerts
Jan 10 17:15:46 compute-0 nova_compute[237049]: + [[ ! -n '' ]]
Jan 10 17:15:46 compute-0 nova_compute[237049]: + . kolla_extend_start
Jan 10 17:15:46 compute-0 nova_compute[237049]: Running command: 'nova-compute'
Jan 10 17:15:46 compute-0 nova_compute[237049]: + echo 'Running command: '\''nova-compute'\'''
Jan 10 17:15:46 compute-0 nova_compute[237049]: + umask 0022
Jan 10 17:15:46 compute-0 nova_compute[237049]: + exec nova-compute
Jan 10 17:15:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:46 compute-0 sudo[237210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgeunubhsphfxhrxvojgcqgikmyemkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768065346.3030536-1307-89128820460768/AnsiballZ_podman_container.py'
Jan 10 17:15:46 compute-0 sudo[237210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:15:47 compute-0 python3.9[237212]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 10 17:15:47 compute-0 systemd[1]: Started libpod-conmon-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope.
Jan 10 17:15:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 10 17:15:47 compute-0 podman[237236]: 2026-01-10 17:15:47.314482131 +0000 UTC m=+0.140088440 container init 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 10 17:15:47 compute-0 podman[237236]: 2026-01-10 17:15:47.321814586 +0000 UTC m=+0.147420885 container start 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Jan 10 17:15:47 compute-0 python3.9[237212]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Applying nova statedir ownership
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 10 17:15:47 compute-0 nova_compute_init[237257]: INFO:nova_statedir:Nova statedir ownership complete
Jan 10 17:15:47 compute-0 systemd[1]: libpod-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope: Deactivated successfully.
Jan 10 17:15:47 compute-0 podman[237258]: 2026-01-10 17:15:47.413377684 +0000 UTC m=+0.052447111 container died 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 17:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd-merged.mount: Deactivated successfully.
Jan 10 17:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd-userdata-shm.mount: Deactivated successfully.
Jan 10 17:15:47 compute-0 podman[237265]: 2026-01-10 17:15:47.466235776 +0000 UTC m=+0.072266183 container cleanup 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:15:47 compute-0 systemd[1]: libpod-conmon-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope: Deactivated successfully.
Jan 10 17:15:47 compute-0 sudo[237210]: pam_unix(sudo:session): session closed for user root
Jan 10 17:15:47 compute-0 ceph-mon[75249]: pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:48 compute-0 sshd-session[212630]: Connection closed by 192.168.122.30 port 43830
Jan 10 17:15:48 compute-0 sshd-session[212627]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:15:48 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 10 17:15:48 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Jan 10 17:15:48 compute-0 systemd[1]: session-50.scope: Consumed 2min 21.093s CPU time.
Jan 10 17:15:48 compute-0 systemd-logind[798]: Removed session 50.
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.128 237053 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.294 237053 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.323 237053 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.324 237053 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 10 17:15:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:48 compute-0 nova_compute[237049]: 2026-01-10 17:15:48.900 237053 INFO nova.virt.driver [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 10 17:15:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:15:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:15:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.022 237053 INFO nova.compute.provider_config [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.038 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.038 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 WARNING oslo_config.cfg [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 10 17:15:49 compute-0 nova_compute[237049]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 10 17:15:49 compute-0 nova_compute[237049]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 10 17:15:49 compute-0 nova_compute[237049]: and ``live_migration_inbound_addr`` respectively.
Jan 10 17:15:49 compute-0 nova_compute[237049]: ).  Its value may be silently ignored in the future.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_secret_uuid        = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.176 237053 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.199 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.214 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f25e4d0c430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.216 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f25e4d0c430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.217 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Connection event '1' reason 'None'
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.224 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host capabilities <capabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]: 
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <host>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <uuid>a9d7d544-72dd-4b08-9e5e-495057bde287</uuid>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <arch>x86_64</arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model>EPYC-Rome-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <vendor>AMD</vendor>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <microcode version='16777317'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <signature family='23' model='49' stepping='0'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='x2apic'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='tsc-deadline'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='osxsave'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='hypervisor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='tsc_adjust'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='spec-ctrl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='stibp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='arch-capabilities'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='cmp_legacy'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='topoext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='virt-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='lbrv'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='tsc-scale'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='vmcb-clean'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='pause-filter'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='pfthreshold'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='svme-addr-chk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='rdctl-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='skip-l1dfl-vmentry'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='mds-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature name='pschange-mc-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <pages unit='KiB' size='4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <pages unit='KiB' size='2048'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <pages unit='KiB' size='1048576'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <power_management>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <suspend_mem/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </power_management>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <iommu support='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <migration_features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <live/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <uri_transports>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <uri_transport>tcp</uri_transport>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <uri_transport>rdma</uri_transport>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </uri_transports>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </migration_features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <topology>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <cells num='1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <cell id='0'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <memory unit='KiB'>7864312</memory>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <pages unit='KiB' size='2048'>0</pages>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <distances>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <sibling id='0' value='10'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           </distances>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           <cpus num='8'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:           </cpus>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         </cell>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </cells>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </topology>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <cache>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </cache>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <secmodel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model>selinux</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <doi>0</doi>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </secmodel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <secmodel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model>dac</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <doi>0</doi>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </secmodel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </host>
Jan 10 17:15:49 compute-0 nova_compute[237049]: 
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <guest>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <os_type>hvm</os_type>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <arch name='i686'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <wordsize>32</wordsize>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <domain type='qemu'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <domain type='kvm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <pae/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <nonpae/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <acpi default='on' toggle='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <apic default='on' toggle='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <cpuselection/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <deviceboot/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <disksnapshot default='on' toggle='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <externalSnapshot/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </guest>
Jan 10 17:15:49 compute-0 nova_compute[237049]: 
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <guest>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <os_type>hvm</os_type>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <arch name='x86_64'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <wordsize>64</wordsize>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <domain type='qemu'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <domain type='kvm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <acpi default='on' toggle='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <apic default='on' toggle='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <cpuselection/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <deviceboot/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <disksnapshot default='on' toggle='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <externalSnapshot/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </guest>
Jan 10 17:15:49 compute-0 nova_compute[237049]: 
Jan 10 17:15:49 compute-0 nova_compute[237049]: </capabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]: 
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.233 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.242 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 10 17:15:49 compute-0 nova_compute[237049]: <domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <domain>kvm</domain>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <arch>i686</arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <vcpu max='4096'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <iothreads supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <os supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='firmware'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <loader supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>rom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pflash</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='readonly'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>yes</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='secure'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </loader>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </os>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='maximumMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <vendor>AMD</vendor>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='succor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='custom' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-128'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-256'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-512'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <memoryBacking supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='sourceType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>anonymous</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>memfd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </memoryBacking>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <disk supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='diskDevice'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>disk</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cdrom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>floppy</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>lun</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>fdc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>sata</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <graphics supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vnc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egl-headless</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </graphics>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <video supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='modelType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vga</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cirrus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>none</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>bochs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ramfb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </video>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hostdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='mode'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>subsystem</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='startupPolicy'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>mandatory</value>
Jan 10 17:15:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>requisite</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>optional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='subsysType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pci</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='capsType'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='pciBackend'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hostdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <rng supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>random</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <filesystem supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='driverType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>path</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>handle</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtiofs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </filesystem>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <tpm supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-tis</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-crb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emulator</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>external</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendVersion'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>2.0</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </tpm>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <redirdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </redirdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <channel supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </channel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <crypto supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </crypto>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <interface supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>passt</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </interface>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <panic supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>isa</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>hyperv</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </panic>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <console supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>null</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dev</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pipe</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stdio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>udp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tcp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu-vdagent</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </console>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <gic supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <genid supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backup supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <async-teardown supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <ps2 supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sev supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sgx supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hyperv supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='features'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>relaxed</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vapic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>spinlocks</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vpindex</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>runtime</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>synic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stimer</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reset</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vendor_id</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>frequencies</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reenlightenment</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tlbflush</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ipi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>avic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emsr_bitmap</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>xmm_input</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hyperv>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <launchSecurity supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='sectype'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tdx</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </launchSecurity>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]: </domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.246 237053 WARNING nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.247 237053 DEBUG nova.virt.libvirt.volume.mount [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.252 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 10 17:15:49 compute-0 nova_compute[237049]: <domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <domain>kvm</domain>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <arch>i686</arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <vcpu max='240'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <iothreads supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <os supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='firmware'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <loader supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>rom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pflash</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='readonly'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>yes</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='secure'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </loader>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </os>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='maximumMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <vendor>AMD</vendor>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='succor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='custom' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-128'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-256'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-512'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <memoryBacking supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='sourceType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>anonymous</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>memfd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </memoryBacking>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <disk supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='diskDevice'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>disk</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cdrom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>floppy</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>lun</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ide</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>fdc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>sata</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <graphics supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vnc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egl-headless</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </graphics>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <video supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='modelType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vga</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cirrus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>none</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>bochs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ramfb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </video>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hostdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='mode'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>subsystem</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='startupPolicy'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>mandatory</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>requisite</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>optional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='subsysType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pci</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='capsType'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='pciBackend'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hostdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <rng supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>random</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <filesystem supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='driverType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>path</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>handle</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtiofs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </filesystem>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <tpm supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-tis</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-crb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emulator</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>external</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendVersion'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>2.0</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </tpm>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <redirdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </redirdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <channel supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </channel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <crypto supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </crypto>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <interface supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>passt</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </interface>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <panic supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>isa</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>hyperv</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </panic>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <console supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>null</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dev</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pipe</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stdio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>udp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tcp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu-vdagent</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </console>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <gic supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <genid supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backup supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <async-teardown supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <ps2 supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sev supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sgx supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hyperv supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='features'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>relaxed</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vapic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>spinlocks</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vpindex</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>runtime</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>synic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stimer</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reset</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vendor_id</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>frequencies</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reenlightenment</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tlbflush</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ipi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>avic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emsr_bitmap</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>xmm_input</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hyperv>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <launchSecurity supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='sectype'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tdx</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </launchSecurity>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]: </domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.291 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.296 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 10 17:15:49 compute-0 nova_compute[237049]: <domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <domain>kvm</domain>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <arch>x86_64</arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <vcpu max='4096'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <iothreads supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <os supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='firmware'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>efi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <loader supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>rom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pflash</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='readonly'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>yes</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='secure'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>yes</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </loader>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </os>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='maximumMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <vendor>AMD</vendor>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='succor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='custom' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-128'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-256'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-512'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <memoryBacking supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='sourceType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>anonymous</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>memfd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </memoryBacking>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <disk supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='diskDevice'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>disk</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cdrom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>floppy</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>lun</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>fdc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>sata</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <graphics supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vnc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egl-headless</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </graphics>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <video supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='modelType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vga</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cirrus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>none</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>bochs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ramfb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </video>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hostdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='mode'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>subsystem</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='startupPolicy'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>mandatory</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>requisite</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>optional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='subsysType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pci</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='capsType'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='pciBackend'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hostdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <rng supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>random</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <filesystem supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='driverType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>path</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>handle</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtiofs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </filesystem>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <tpm supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-tis</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-crb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emulator</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>external</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendVersion'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>2.0</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </tpm>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <redirdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </redirdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <channel supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </channel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <crypto supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </crypto>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <interface supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>passt</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </interface>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <panic supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>isa</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>hyperv</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </panic>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <console supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>null</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dev</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pipe</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stdio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>udp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tcp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu-vdagent</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </console>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <gic supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <genid supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backup supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <async-teardown supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <ps2 supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sev supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sgx supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hyperv supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='features'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>relaxed</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vapic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>spinlocks</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vpindex</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>runtime</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>synic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stimer</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reset</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vendor_id</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>frequencies</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reenlightenment</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tlbflush</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ipi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>avic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emsr_bitmap</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>xmm_input</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hyperv>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <launchSecurity supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='sectype'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tdx</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </launchSecurity>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]: </domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.353 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 10 17:15:49 compute-0 nova_compute[237049]: <domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <path>/usr/libexec/qemu-kvm</path>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <domain>kvm</domain>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <arch>x86_64</arch>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <vcpu max='240'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <iothreads supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <os supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='firmware'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <loader supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>rom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pflash</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='readonly'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>yes</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='secure'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>no</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </loader>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </os>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-passthrough' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='hostPassthroughMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='maximum' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='maximumMigratable'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>on</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>off</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='host-model' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <vendor>AMD</vendor>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='x2apic'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-deadline'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='hypervisor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc_adjust'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='spec-ctrl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='stibp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='cmp_legacy'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='overflow-recov'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='succor'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='amd-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='virt-ssbd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lbrv'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='tsc-scale'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='vmcb-clean'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='flushbyasid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pause-filter'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='pfthreshold'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='svme-addr-chk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <feature policy='disable' name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <mode name='custom' supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Broadwell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cascadelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Cooperlake-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Denverton-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Dhyana-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Genoa-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='auto-ibrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Milan-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amd-psfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='no-nested-data-bp'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='null-sel-clr-base'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='stibp-always-on'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-Rome-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='EPYC-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='GraniteRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-128'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-256'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx10-512'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='prefetchiti'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Haswell-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-noTSX'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v6'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Icelake-Server-v7'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='IvyBridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='KnightsMill-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4fmaps'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-4vnniw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512er'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512pf'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G4-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Opteron_G5-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fma4'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tbm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xop'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SapphireRapids-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='amx-tile'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-bf16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-fp16'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512-vpopcntdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bitalg'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vbmi2'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrc'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fzrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='la57'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='taa-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='tsx-ldtrk'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xfd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='SierraForest-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ifma'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-ne-convert'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx-vnni-int8'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='bus-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cmpccxadd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fbsdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='fsrs'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ibrs-all'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mcdt-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pbrsb-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='psdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='sbdr-ssdp-no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='serialize'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vaes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='vpclmulqdq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Client-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='hle'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='rtm'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Skylake-Server-v5'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512bw'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512cd'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512dq'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512f'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='avx512vl'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='invpcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pcid'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='pku'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='mpx'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v2'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v3'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='core-capability'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='split-lock-detect'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='Snowridge-v4'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='cldemote'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='erms'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='gfni'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdir64b'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='movdiri'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='xsaves'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='athlon-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='core2duo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='coreduo-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='n270-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='ss'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <blockers model='phenom-v1'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnow'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <feature name='3dnowext'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </blockers>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </mode>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <memoryBacking supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <enum name='sourceType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>anonymous</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <value>memfd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </memoryBacking>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <disk supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='diskDevice'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>disk</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cdrom</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>floppy</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>lun</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ide</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>fdc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>sata</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <graphics supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vnc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egl-headless</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </graphics>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <video supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='modelType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vga</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>cirrus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>none</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>bochs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ramfb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </video>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hostdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='mode'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>subsystem</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='startupPolicy'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>mandatory</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>requisite</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>optional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='subsysType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pci</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>scsi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='capsType'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='pciBackend'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hostdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <rng supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtio-non-transitional</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>random</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>egd</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <filesystem supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='driverType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>path</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>handle</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>virtiofs</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </filesystem>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <tpm supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-tis</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tpm-crb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emulator</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>external</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendVersion'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>2.0</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </tpm>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <redirdev supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='bus'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>usb</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </redirdev>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <channel supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </channel>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <crypto supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendModel'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>builtin</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </crypto>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <interface supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='backendType'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>default</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>passt</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </interface>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <panic supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='model'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>isa</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>hyperv</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </panic>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <console supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='type'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>null</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vc</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pty</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dev</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>file</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>pipe</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stdio</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>udp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tcp</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>unix</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>qemu-vdagent</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>dbus</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </console>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   <features>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <gic supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <vmcoreinfo supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <genid supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backingStoreInput supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <backup supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <async-teardown supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <ps2 supported='yes'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sev supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <sgx supported='no'/>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <hyperv supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='features'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>relaxed</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vapic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>spinlocks</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vpindex</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>runtime</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>synic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>stimer</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reset</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>vendor_id</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>frequencies</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>reenlightenment</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tlbflush</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>ipi</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>avic</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>emsr_bitmap</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>xmm_input</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <spinlocks>4095</spinlocks>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <stimer_direct>on</stimer_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_direct>on</tlbflush_direct>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <tlbflush_extended>on</tlbflush_extended>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </defaults>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </hyperv>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     <launchSecurity supported='yes'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       <enum name='sectype'>
Jan 10 17:15:49 compute-0 nova_compute[237049]:         <value>tdx</value>
Jan 10 17:15:49 compute-0 nova_compute[237049]:       </enum>
Jan 10 17:15:49 compute-0 nova_compute[237049]:     </launchSecurity>
Jan 10 17:15:49 compute-0 nova_compute[237049]:   </features>
Jan 10 17:15:49 compute-0 nova_compute[237049]: </domainCapabilities>
Jan 10 17:15:49 compute-0 nova_compute[237049]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.451 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.451 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Secure Boot support detected
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.453 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.454 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.462 237053 DEBUG nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.518 237053 INFO nova.virt.node [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Determined node identity 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from /var/lib/nova/compute_id
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.548 237053 WARNING nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute nodes ['5f85855c-8a9b-43b5-ae49-f5846b9dcebe'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.584 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.618 237053 WARNING nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:15:49 compute-0 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:15:49 compute-0 ceph-mon[75249]: pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:15:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799514320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.193 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:15:50 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 10 17:15:50 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 10 17:15:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.543 237053 WARNING nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.546 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5259MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.546 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.547 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.566 237053 WARNING nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] No compute node record for compute-0.ctlplane.example.com:5f85855c-8a9b-43b5-ae49-f5846b9dcebe: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 5f85855c-8a9b-43b5-ae49-f5846b9dcebe could not be found.
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.585 237053 INFO nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.652 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:15:50 compute-0 nova_compute[237049]: 2026-01-10 17:15:50.652 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:15:50 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/799514320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:15:51 compute-0 nova_compute[237049]: 2026-01-10 17:15:51.691 237053 INFO nova.scheduler.client.report [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [req-55c6c937-395c-49e6-b754-05c0e3db2256] Created resource provider record via placement API for resource provider with UUID 5f85855c-8a9b-43b5-ae49-f5846b9dcebe and name compute-0.ctlplane.example.com.
Jan 10 17:15:51 compute-0 ceph-mon[75249]: pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.046 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:15:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:15:52 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294518686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.660 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.667 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 10 17:15:52 compute-0 nova_compute[237049]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.667 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] kernel doesn't support AMD SEV
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.668 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.668 237053 DEBUG nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.748 237053 DEBUG nova.scheduler.client.report [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updated inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.750 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.751 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 17:15:52 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4294518686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.869 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.895 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.896 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.897 237053 DEBUG nova.service [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.989 237053 DEBUG nova.service [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 10 17:15:52 compute-0 nova_compute[237049]: 2026-01-10 17:15:52.990 237053 DEBUG nova.servicegroup.drivers.db [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 10 17:15:53 compute-0 ceph-mon[75249]: pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:55 compute-0 ceph-mon[75249]: pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:57 compute-0 podman[237414]: 2026-01-10 17:15:57.076108986 +0000 UTC m=+0.078478325 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:15:57 compute-0 podman[237415]: 2026-01-10 17:15:57.128089573 +0000 UTC m=+0.116772981 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 10 17:15:57 compute-0 ceph-mon[75249]: pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:15:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:15:59 compute-0 ceph-mon[75249]: pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:01 compute-0 anacron[99453]: Job `cron.daily' started
Jan 10 17:16:01 compute-0 anacron[99453]: Job `cron.daily' terminated
Jan 10 17:16:01 compute-0 ceph-mon[75249]: pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:03 compute-0 sudo[237461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:16:03 compute-0 sudo[237461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:03 compute-0 sudo[237461]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:03 compute-0 sudo[237486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:16:03 compute-0 sudo[237486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:03 compute-0 ceph-mon[75249]: pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:04 compute-0 sudo[237486]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:16:04 compute-0 sudo[237542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:16:04 compute-0 sudo[237542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:04 compute-0 sudo[237542]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:04 compute-0 sudo[237567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:16:04 compute-0 sudo[237567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.608330651 +0000 UTC m=+0.047166914 container create 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:16:04 compute-0 systemd[1]: Started libpod-conmon-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope.
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.586631287 +0000 UTC m=+0.025467550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:04 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.709008734 +0000 UTC m=+0.147845047 container init 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.721178063 +0000 UTC m=+0.160014306 container start 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.725224655 +0000 UTC m=+0.164060898 container attach 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:16:04 compute-0 flamboyant_hoover[237619]: 167 167
Jan 10 17:16:04 compute-0 systemd[1]: libpod-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope: Deactivated successfully.
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.732649502 +0000 UTC m=+0.171485755 container died 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 17:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-630e0f6efe8c12391f15fc5a7aa8a056b03c57cb7fc09c1a68b84da0fe2c4752-merged.mount: Deactivated successfully.
Jan 10 17:16:04 compute-0 podman[237603]: 2026-01-10 17:16:04.776507403 +0000 UTC m=+0.215343636 container remove 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:16:04 compute-0 systemd[1]: libpod-conmon-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope: Deactivated successfully.
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:16:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:16:04 compute-0 podman[237643]: 2026-01-10 17:16:04.983421223 +0000 UTC m=+0.062515361 container create 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:16:05 compute-0 systemd[1]: Started libpod-conmon-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope.
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:04.962210472 +0000 UTC m=+0.041304650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:05 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:05.081694358 +0000 UTC m=+0.160788576 container init 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:05.0943393 +0000 UTC m=+0.173433468 container start 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:05.099922255 +0000 UTC m=+0.179016413 container attach 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 17:16:05 compute-0 vigilant_curie[237659]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:16:05 compute-0 vigilant_curie[237659]: --> All data devices are unavailable
Jan 10 17:16:05 compute-0 systemd[1]: libpod-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope: Deactivated successfully.
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:05.643542968 +0000 UTC m=+0.722637106 container died 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29-merged.mount: Deactivated successfully.
Jan 10 17:16:05 compute-0 podman[237643]: 2026-01-10 17:16:05.711735636 +0000 UTC m=+0.790829794 container remove 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:05 compute-0 systemd[1]: libpod-conmon-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope: Deactivated successfully.
Jan 10 17:16:05 compute-0 sudo[237567]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:05 compute-0 sudo[237692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:16:05 compute-0 sudo[237692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:05 compute-0 sudo[237692]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:05 compute-0 ceph-mon[75249]: pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:05 compute-0 sudo[237717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:16:05 compute-0 sudo[237717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.253555949 +0000 UTC m=+0.060920637 container create de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:16:06 compute-0 systemd[1]: Started libpod-conmon-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope.
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.227905845 +0000 UTC m=+0.035270623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.356006601 +0000 UTC m=+0.163371319 container init de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.367311986 +0000 UTC m=+0.174676674 container start de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.370859404 +0000 UTC m=+0.178224092 container attach de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:16:06 compute-0 epic_cori[237770]: 167 167
Jan 10 17:16:06 compute-0 systemd[1]: libpod-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope: Deactivated successfully.
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.373026465 +0000 UTC m=+0.180391193 container died de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-847e363c224eebeb0a57b1e0aca80da1a789424b194de48e0876ef672bc516b5-merged.mount: Deactivated successfully.
Jan 10 17:16:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:06 compute-0 podman[237754]: 2026-01-10 17:16:06.421948237 +0000 UTC m=+0.229312965 container remove de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:06 compute-0 systemd[1]: libpod-conmon-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope: Deactivated successfully.
Jan 10 17:16:06 compute-0 podman[237794]: 2026-01-10 17:16:06.635580554 +0000 UTC m=+0.054996592 container create 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:16:06 compute-0 systemd[1]: Started libpod-conmon-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope.
Jan 10 17:16:06 compute-0 podman[237794]: 2026-01-10 17:16:06.610037332 +0000 UTC m=+0.029453430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:06 compute-0 podman[237794]: 2026-01-10 17:16:06.738163649 +0000 UTC m=+0.157579677 container init 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:16:06 compute-0 podman[237794]: 2026-01-10 17:16:06.749677 +0000 UTC m=+0.169093028 container start 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:16:06 compute-0 podman[237794]: 2026-01-10 17:16:06.75580267 +0000 UTC m=+0.175218668 container attach 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]: {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     "0": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "devices": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "/dev/loop3"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             ],
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_name": "ceph_lv0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_size": "21470642176",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "name": "ceph_lv0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "tags": {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_name": "ceph",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.crush_device_class": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.encrypted": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.objectstore": "bluestore",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_id": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.vdo": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.with_tpm": "0"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             },
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "vg_name": "ceph_vg0"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         }
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     ],
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     "1": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "devices": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "/dev/loop4"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             ],
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_name": "ceph_lv1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_size": "21470642176",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "name": "ceph_lv1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "tags": {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_name": "ceph",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.crush_device_class": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.encrypted": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.objectstore": "bluestore",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_id": "1",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.vdo": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.with_tpm": "0"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             },
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "vg_name": "ceph_vg1"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         }
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     ],
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     "2": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "devices": [
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "/dev/loop5"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             ],
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_name": "ceph_lv2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_size": "21470642176",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "name": "ceph_lv2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "tags": {
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.cluster_name": "ceph",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.crush_device_class": "",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.encrypted": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.objectstore": "bluestore",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osd_id": "2",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.vdo": "0",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:                 "ceph.with_tpm": "0"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             },
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "type": "block",
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:             "vg_name": "ceph_vg2"
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:         }
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]:     ]
Jan 10 17:16:07 compute-0 exciting_visvesvaraya[237811]: }
Jan 10 17:16:07 compute-0 systemd[1]: libpod-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope: Deactivated successfully.
Jan 10 17:16:07 compute-0 podman[237794]: 2026-01-10 17:16:07.082863815 +0000 UTC m=+0.502279853 container died 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b-merged.mount: Deactivated successfully.
Jan 10 17:16:07 compute-0 podman[237794]: 2026-01-10 17:16:07.138767161 +0000 UTC m=+0.558183159 container remove 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:16:07 compute-0 systemd[1]: libpod-conmon-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope: Deactivated successfully.
Jan 10 17:16:07 compute-0 sudo[237717]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:07 compute-0 sudo[237834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:16:07 compute-0 sudo[237834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:07 compute-0 sudo[237834]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:07 compute-0 sudo[237859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:16:07 compute-0 sudo[237859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:07 compute-0 podman[237896]: 2026-01-10 17:16:07.65063971 +0000 UTC m=+0.051692050 container create d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:16:07 compute-0 systemd[1]: Started libpod-conmon-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope.
Jan 10 17:16:07 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:07 compute-0 podman[237896]: 2026-01-10 17:16:07.621602112 +0000 UTC m=+0.022654512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:07 compute-0 podman[237896]: 2026-01-10 17:16:07.733055404 +0000 UTC m=+0.134107744 container init d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:16:07 compute-0 podman[237896]: 2026-01-10 17:16:07.744951745 +0000 UTC m=+0.146004095 container start d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 10 17:16:07 compute-0 busy_mccarthy[237912]: 167 167
Jan 10 17:16:07 compute-0 podman[237896]: 2026-01-10 17:16:07.749155872 +0000 UTC m=+0.150208222 container attach d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:16:07 compute-0 systemd[1]: libpod-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope: Deactivated successfully.
Jan 10 17:16:07 compute-0 podman[237917]: 2026-01-10 17:16:07.814490251 +0000 UTC m=+0.040274092 container died d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ea7d79bf41ee1807e4af9d79d2e45ce8b966ad38a9a2a5bbd26bbbc3eb4510-merged.mount: Deactivated successfully.
Jan 10 17:16:07 compute-0 podman[237917]: 2026-01-10 17:16:07.864012319 +0000 UTC m=+0.089796110 container remove d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:16:07 compute-0 systemd[1]: libpod-conmon-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope: Deactivated successfully.
Jan 10 17:16:07 compute-0 ceph-mon[75249]: pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:08 compute-0 podman[237939]: 2026-01-10 17:16:08.074066057 +0000 UTC m=+0.054532689 container create 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:16:08 compute-0 systemd[1]: Started libpod-conmon-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope.
Jan 10 17:16:08 compute-0 podman[237939]: 2026-01-10 17:16:08.049973896 +0000 UTC m=+0.030440578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:16:08 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:16:08 compute-0 podman[237939]: 2026-01-10 17:16:08.169561765 +0000 UTC m=+0.150028417 container init 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 17:16:08 compute-0 podman[237939]: 2026-01-10 17:16:08.183060471 +0000 UTC m=+0.163527133 container start 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:16:08 compute-0 podman[237939]: 2026-01-10 17:16:08.187242027 +0000 UTC m=+0.167708719 container attach 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:08 compute-0 lvm[238037]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:16:08 compute-0 lvm[238037]: VG ceph_vg1 finished
Jan 10 17:16:08 compute-0 lvm[238033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:16:08 compute-0 lvm[238033]: VG ceph_vg0 finished
Jan 10 17:16:08 compute-0 lvm[238038]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:16:08 compute-0 lvm[238038]: VG ceph_vg2 finished
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:09 compute-0 nostalgic_faraday[237955]: {}
Jan 10 17:16:09 compute-0 systemd[1]: libpod-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Deactivated successfully.
Jan 10 17:16:09 compute-0 systemd[1]: libpod-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Consumed 1.403s CPU time.
Jan 10 17:16:09 compute-0 podman[237939]: 2026-01-10 17:16:09.056419213 +0000 UTC m=+1.036885845 container died 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf-merged.mount: Deactivated successfully.
Jan 10 17:16:09 compute-0 podman[237939]: 2026-01-10 17:16:09.108349708 +0000 UTC m=+1.088816340 container remove 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:16:09 compute-0 systemd[1]: libpod-conmon-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Deactivated successfully.
Jan 10 17:16:09 compute-0 sudo[237859]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:16:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:16:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:09 compute-0 sudo[238053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:16:09 compute-0 sudo[238053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:16:09 compute-0 sudo[238053]: pam_unix(sudo:session): session closed for user root
Jan 10 17:16:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:09 compute-0 ceph-mon[75249]: pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:16:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:16:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:12 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:16:12 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:16:12 compute-0 nova_compute[237049]: 2026-01-10 17:16:12.993 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:13 compute-0 nova_compute[237049]: 2026-01-10 17:16:13.018 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:13 compute-0 ceph-mon[75249]: pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:15 compute-0 ceph-mon[75249]: pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:17 compute-0 ceph-mon[75249]: pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:19 compute-0 ceph-mon[75249]: pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:20 compute-0 sshd-session[238078]: Connection closed by authenticating user root 216.36.124.133 port 50846 [preauth]
Jan 10 17:16:21 compute-0 ceph-mon[75249]: pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:23 compute-0 ceph-mon[75249]: pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:25 compute-0 ceph-mon[75249]: pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:27 compute-0 ceph-mon[75249]: pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:28 compute-0 podman[238080]: 2026-01-10 17:16:28.065777387 +0000 UTC m=+0.061468972 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:16:28 compute-0 podman[238081]: 2026-01-10 17:16:28.117547898 +0000 UTC m=+0.112299727 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:16:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:29 compute-0 ceph-mon[75249]: pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:31 compute-0 ceph-mon[75249]: pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:33 compute-0 ceph-mon[75249]: pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:35 compute-0 ceph-mon[75249]: pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:37 compute-0 ceph-mon[75249]: pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:16:38
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images']
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:16:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:16:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:16:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:16:39 compute-0 ceph-mon[75249]: pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:41 compute-0 ceph-mon[75249]: pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:43 compute-0 ceph-mon[75249]: pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:16:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:16:45 compute-0 ceph-mon[75249]: pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:47 compute-0 ceph-mon[75249]: pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.360 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.360 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.361 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.361 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.363 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.363 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.388 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.389 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.390 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:16:48 compute-0 nova_compute[237049]: 2026-01-10 17:16:48.391 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:16:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:16:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:16:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.919 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:16:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:16:48 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750219385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.003 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.206 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.208 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5297MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.209 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.209 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:16:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.455 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.456 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:16:49 compute-0 nova_compute[237049]: 2026-01-10 17:16:49.478 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:16:49 compute-0 ceph-mon[75249]: pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:49 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/750219385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:16:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:16:49 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022571183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:16:50 compute-0 nova_compute[237049]: 2026-01-10 17:16:50.014 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:16:50 compute-0 nova_compute[237049]: 2026-01-10 17:16:50.021 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:16:50 compute-0 nova_compute[237049]: 2026-01-10 17:16:50.058 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:16:50 compute-0 nova_compute[237049]: 2026-01-10 17:16:50.082 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:16:50 compute-0 nova_compute[237049]: 2026-01-10 17:16:50.082 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:16:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:50 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1022571183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:16:51 compute-0 ceph-mon[75249]: pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:53 compute-0 ceph-mon[75249]: pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 10 17:16:54 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/272340665' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 17:16:54 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 17:16:54 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 17:16:54 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 17:16:55 compute-0 ceph-mon[75249]: pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:55 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/272340665' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 17:16:55 compute-0 ceph-mon[75249]: from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 17:16:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:57 compute-0 ceph-mon[75249]: pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:16:59 compute-0 podman[238166]: 2026-01-10 17:16:59.044571077 +0000 UTC m=+0.051886607 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 17:16:59 compute-0 podman[238167]: 2026-01-10 17:16:59.083655856 +0000 UTC m=+0.090979886 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:16:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:16:59 compute-0 ceph-mon[75249]: pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:01 compute-0 ceph-mon[75249]: pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:03 compute-0 ceph-mon[75249]: pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:05 compute-0 ceph-mon[75249]: pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:07 compute-0 ceph-mon[75249]: pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:09 compute-0 sudo[238207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:17:09 compute-0 sudo[238207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:09 compute-0 sudo[238207]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:09 compute-0 sudo[238232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:17:09 compute-0 sudo[238232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:09 compute-0 ceph-mon[75249]: pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:10 compute-0 sudo[238232]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:17:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:17:10 compute-0 sudo[238288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:17:10 compute-0 sudo[238288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:10 compute-0 sudo[238288]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:10 compute-0 sudo[238313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:17:10 compute-0 sudo[238313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:17:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:17:10 compute-0 podman[238351]: 2026-01-10 17:17:10.705214834 +0000 UTC m=+0.048055198 container create 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:17:10 compute-0 systemd[1]: Started libpod-conmon-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope.
Jan 10 17:17:10 compute-0 podman[238351]: 2026-01-10 17:17:10.683934401 +0000 UTC m=+0.026774795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:10 compute-0 podman[238351]: 2026-01-10 17:17:10.818761696 +0000 UTC m=+0.161602150 container init 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:17:10 compute-0 podman[238351]: 2026-01-10 17:17:10.832324912 +0000 UTC m=+0.175165306 container start 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:17:10 compute-0 podman[238351]: 2026-01-10 17:17:10.836812277 +0000 UTC m=+0.179652681 container attach 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:17:10 compute-0 upbeat_turing[238367]: 167 167
Jan 10 17:17:10 compute-0 systemd[1]: libpod-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope: Deactivated successfully.
Jan 10 17:17:10 compute-0 podman[238372]: 2026-01-10 17:17:10.904898706 +0000 UTC m=+0.044127979 container died 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6989a628d69c015454a4c162057b998022ddbda44ad4678de23724c5e83f18a4-merged.mount: Deactivated successfully.
Jan 10 17:17:10 compute-0 podman[238372]: 2026-01-10 17:17:10.964586572 +0000 UTC m=+0.103815814 container remove 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 10 17:17:10 compute-0 systemd[1]: libpod-conmon-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope: Deactivated successfully.
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.208927625 +0000 UTC m=+0.047751661 container create ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 17:17:11 compute-0 systemd[1]: Started libpod-conmon-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope.
Jan 10 17:17:11 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.188464842 +0000 UTC m=+0.027288898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.297215651 +0000 UTC m=+0.136039727 container init ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.312342147 +0000 UTC m=+0.151166193 container start ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.316501963 +0000 UTC m=+0.155326009 container attach ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:17:11 compute-0 ceph-mon[75249]: pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:11 compute-0 amazing_ellis[238410]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:17:11 compute-0 amazing_ellis[238410]: --> All data devices are unavailable
Jan 10 17:17:11 compute-0 systemd[1]: libpod-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope: Deactivated successfully.
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.913046716 +0000 UTC m=+0.751870782 container died ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581-merged.mount: Deactivated successfully.
Jan 10 17:17:11 compute-0 podman[238394]: 2026-01-10 17:17:11.966391089 +0000 UTC m=+0.805215125 container remove ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:17:11 compute-0 systemd[1]: libpod-conmon-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope: Deactivated successfully.
Jan 10 17:17:12 compute-0 sudo[238313]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:12 compute-0 sudo[238441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:17:12 compute-0 sudo[238441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:12 compute-0 sudo[238441]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:12 compute-0 sudo[238466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:17:12 compute-0 sudo[238466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.468037136 +0000 UTC m=+0.049633239 container create e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 10 17:17:12 compute-0 systemd[1]: Started libpod-conmon-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope.
Jan 10 17:17:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.446216369 +0000 UTC m=+0.027812472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.542390486 +0000 UTC m=+0.123986609 container init e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.550864142 +0000 UTC m=+0.132460205 container start e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:17:12 compute-0 busy_murdock[238520]: 167 167
Jan 10 17:17:12 compute-0 systemd[1]: libpod-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope: Deactivated successfully.
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.554577767 +0000 UTC m=+0.136173850 container attach e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.5566484 +0000 UTC m=+0.138244463 container died e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 17:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bbe8e70c5e4bc7a6b3405a16233b39006cdfe2084f59ca9a98070bf0eb04b87-merged.mount: Deactivated successfully.
Jan 10 17:17:12 compute-0 podman[238504]: 2026-01-10 17:17:12.601564878 +0000 UTC m=+0.183160981 container remove e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 10 17:17:12 compute-0 systemd[1]: libpod-conmon-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope: Deactivated successfully.
Jan 10 17:17:12 compute-0 podman[238543]: 2026-01-10 17:17:12.868196771 +0000 UTC m=+0.076915947 container create 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:17:12 compute-0 systemd[1]: Started libpod-conmon-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope.
Jan 10 17:17:12 compute-0 podman[238543]: 2026-01-10 17:17:12.836683105 +0000 UTC m=+0.045402361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:12 compute-0 podman[238543]: 2026-01-10 17:17:12.96915359 +0000 UTC m=+0.177872776 container init 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:17:12 compute-0 podman[238543]: 2026-01-10 17:17:12.978855508 +0000 UTC m=+0.187574674 container start 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:17:12 compute-0 podman[238543]: 2026-01-10 17:17:12.981773003 +0000 UTC m=+0.190492249 container attach 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 17:17:13 compute-0 strange_driscoll[238560]: {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     "0": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "devices": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "/dev/loop3"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             ],
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_name": "ceph_lv0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_size": "21470642176",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "name": "ceph_lv0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "tags": {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_name": "ceph",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.crush_device_class": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.encrypted": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.objectstore": "bluestore",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_id": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.vdo": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.with_tpm": "0"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             },
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "vg_name": "ceph_vg0"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         }
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     ],
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     "1": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "devices": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "/dev/loop4"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             ],
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_name": "ceph_lv1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_size": "21470642176",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "name": "ceph_lv1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "tags": {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_name": "ceph",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.crush_device_class": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.encrypted": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.objectstore": "bluestore",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_id": "1",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.vdo": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.with_tpm": "0"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             },
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "vg_name": "ceph_vg1"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         }
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     ],
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     "2": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "devices": [
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "/dev/loop5"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             ],
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_name": "ceph_lv2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_size": "21470642176",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "name": "ceph_lv2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "tags": {
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.cluster_name": "ceph",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.crush_device_class": "",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.encrypted": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.objectstore": "bluestore",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osd_id": "2",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.vdo": "0",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:                 "ceph.with_tpm": "0"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             },
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "type": "block",
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:             "vg_name": "ceph_vg2"
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:         }
Jan 10 17:17:13 compute-0 strange_driscoll[238560]:     ]
Jan 10 17:17:13 compute-0 strange_driscoll[238560]: }
Jan 10 17:17:13 compute-0 systemd[1]: libpod-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope: Deactivated successfully.
Jan 10 17:17:13 compute-0 podman[238543]: 2026-01-10 17:17:13.347807184 +0000 UTC m=+0.556526350 container died 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a-merged.mount: Deactivated successfully.
Jan 10 17:17:13 compute-0 podman[238543]: 2026-01-10 17:17:13.389477909 +0000 UTC m=+0.598197115 container remove 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:17:13 compute-0 systemd[1]: libpod-conmon-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope: Deactivated successfully.
Jan 10 17:17:13 compute-0 sudo[238466]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:13 compute-0 sudo[238581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:17:13 compute-0 sudo[238581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:13 compute-0 sudo[238581]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:13 compute-0 sudo[238606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:17:13 compute-0 sudo[238606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:13 compute-0 ceph-mon[75249]: pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:13 compute-0 podman[238643]: 2026-01-10 17:17:13.947430255 +0000 UTC m=+0.055855128 container create e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:17:13 compute-0 systemd[1]: Started libpod-conmon-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope.
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:13.921601265 +0000 UTC m=+0.030026228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:14.031564795 +0000 UTC m=+0.139989688 container init e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:14.043223033 +0000 UTC m=+0.151647946 container start e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:14.047213735 +0000 UTC m=+0.155638618 container attach e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:17:14 compute-0 jolly_blackburn[238659]: 167 167
Jan 10 17:17:14 compute-0 systemd[1]: libpod-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope: Deactivated successfully.
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:14.049313578 +0000 UTC m=+0.157738461 container died e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 17:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6ce51383189aadbd701a09827537c3ee1766afc76683e5d3ef5ceb412b36e99-merged.mount: Deactivated successfully.
Jan 10 17:17:14 compute-0 podman[238643]: 2026-01-10 17:17:14.087545555 +0000 UTC m=+0.195970438 container remove e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:17:14 compute-0 systemd[1]: libpod-conmon-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope: Deactivated successfully.
Jan 10 17:17:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:14 compute-0 podman[238682]: 2026-01-10 17:17:14.334997088 +0000 UTC m=+0.073000466 container create ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:17:14 compute-0 systemd[1]: Started libpod-conmon-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope.
Jan 10 17:17:14 compute-0 podman[238682]: 2026-01-10 17:17:14.307984268 +0000 UTC m=+0.045987706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:17:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:17:14 compute-0 podman[238682]: 2026-01-10 17:17:14.439547819 +0000 UTC m=+0.177551197 container init ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:17:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:14 compute-0 podman[238682]: 2026-01-10 17:17:14.452051949 +0000 UTC m=+0.190055337 container start ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:17:14 compute-0 podman[238682]: 2026-01-10 17:17:14.457152319 +0000 UTC m=+0.195155727 container attach ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:17:15 compute-0 lvm[238776]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:17:15 compute-0 lvm[238775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:17:15 compute-0 lvm[238776]: VG ceph_vg1 finished
Jan 10 17:17:15 compute-0 lvm[238775]: VG ceph_vg0 finished
Jan 10 17:17:15 compute-0 lvm[238778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:17:15 compute-0 lvm[238778]: VG ceph_vg2 finished
Jan 10 17:17:15 compute-0 xenodochial_ride[238697]: {}
Jan 10 17:17:15 compute-0 systemd[1]: libpod-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Deactivated successfully.
Jan 10 17:17:15 compute-0 podman[238682]: 2026-01-10 17:17:15.307229689 +0000 UTC m=+1.045233037 container died ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:17:15 compute-0 systemd[1]: libpod-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Consumed 1.378s CPU time.
Jan 10 17:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526-merged.mount: Deactivated successfully.
Jan 10 17:17:15 compute-0 podman[238682]: 2026-01-10 17:17:15.348155105 +0000 UTC m=+1.086158463 container remove ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:17:15 compute-0 systemd[1]: libpod-conmon-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Deactivated successfully.
Jan 10 17:17:15 compute-0 sudo[238606]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:17:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:17:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:15 compute-0 sudo[238792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:17:15 compute-0 sudo[238792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:17:15 compute-0 sudo[238792]: pam_unix(sudo:session): session closed for user root
Jan 10 17:17:15 compute-0 ceph-mon[75249]: pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:15 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:15 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:17:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 10 17:17:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2471123660' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 17:17:16 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14318 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 17:17:16 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 17:17:16 compute-0 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 17:17:17 compute-0 ceph-mon[75249]: pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:17 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2471123660' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 17:17:17 compute-0 ceph-mon[75249]: from='client.14318 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 17:17:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:17:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3105 writes, 13K keys, 3105 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 3105 writes, 3105 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5596 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 5.75 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.5      0.10              0.05         6    0.017       0      0       0.0       0.0
                                             L6      1/0    4.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    109.1     89.2      0.28              0.16         5    0.056     16K   2269       0.0       0.0
                                            Sum      1/0    4.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     79.8     92.0      0.39              0.21        11    0.035     16K   2269       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     84.2     85.8      0.23              0.13         6    0.038     10K   1495       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    109.1     89.2      0.28              0.16         5    0.056     16K   2269       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    103.1      0.10              0.05         5    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.010, interval 0.004
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 0.4 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 308.00 MB usage: 1.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000226 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(99,1.29 MB,0.419983%) FilterBlock(12,55.17 KB,0.0174931%) IndexBlock(12,109.77 KB,0.0348029%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 10 17:17:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:19 compute-0 ceph-mon[75249]: pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:21 compute-0 ceph-mon[75249]: pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:23 compute-0 ceph-mon[75249]: pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:25 compute-0 ceph-mon[75249]: pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:27 compute-0 ceph-mon[75249]: pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:29 compute-0 ceph-mon[75249]: pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:30 compute-0 podman[238817]: 2026-01-10 17:17:30.100057995 +0000 UTC m=+0.087510677 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:17:30 compute-0 podman[238818]: 2026-01-10 17:17:30.152115685 +0000 UTC m=+0.137551376 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 10 17:17:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:31 compute-0 ceph-mon[75249]: pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:33 compute-0 ceph-mon[75249]: pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:35 compute-0 ceph-mon[75249]: pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:17:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:17:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:17:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:17:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:17:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:17:37 compute-0 ceph-mon[75249]: pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:17:38
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', 'backups']
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:17:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:17:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:17:39 compute-0 ceph-mon[75249]: pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:42 compute-0 ceph-mon[75249]: pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:44 compute-0 ceph-mon[75249]: pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:17:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:17:46 compute-0 ceph-mon[75249]: pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:47 compute-0 ceph-mon[75249]: pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:17:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.919 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:17:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:17:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:49 compute-0 ceph-mon[75249]: pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.069 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.069 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.099 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.099 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.100 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.117 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.160 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.162 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.163 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:17:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:17:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986029489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.764 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.978 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.980 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.981 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:17:50 compute-0 nova_compute[237049]: 2026-01-10 17:17:50.981 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.072 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.073 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.107 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:17:51 compute-0 ceph-mon[75249]: pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:51 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3986029489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:17:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:17:51 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766907015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.690 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.697 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.715 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.718 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.718 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.945 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.947 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:17:51 compute-0 nova_compute[237049]: 2026-01-10 17:17:51.947 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:17:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:52 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2766907015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:17:53 compute-0 ceph-mon[75249]: pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:55 compute-0 ceph-mon[75249]: pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:57 compute-0 ceph-mon[75249]: pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:57 compute-0 sshd-session[238906]: Connection closed by authenticating user root 216.36.124.133 port 51664 [preauth]
Jan 10 17:17:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:17:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:17:59 compute-0 ceph-mon[75249]: pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:01 compute-0 podman[238908]: 2026-01-10 17:18:01.05449838 +0000 UTC m=+0.058202029 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:18:01 compute-0 podman[238909]: 2026-01-10 17:18:01.106736174 +0000 UTC m=+0.109299502 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, tcib_managed=true)
Jan 10 17:18:01 compute-0 ceph-mon[75249]: pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:03 compute-0 ceph-mon[75249]: pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:05 compute-0 ceph-mon[75249]: pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:07 compute-0 ceph-mon[75249]: pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:09 compute-0 ceph-mon[75249]: pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:10 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.339 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 10 17:18:10 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.341 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 10 17:18:10 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.346 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:18:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:11 compute-0 ceph-mon[75249]: pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:13 compute-0 ceph-mon[75249]: pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:15 compute-0 sudo[238953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:18:15 compute-0 sudo[238953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:15 compute-0 sudo[238953]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:15 compute-0 sudo[238978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 10 17:18:15 compute-0 sudo[238978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:15 compute-0 ceph-mon[75249]: pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:16 compute-0 sudo[238978]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:18:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:18:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:16 compute-0 sudo[239023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:18:16 compute-0 sudo[239023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:16 compute-0 sudo[239023]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:16 compute-0 sudo[239048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:18:16 compute-0 sudo[239048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:16 compute-0 sudo[239048]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:18:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:18:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:18:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:18:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:18:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:18:17 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:18:17 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:18:17 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:17 compute-0 ceph-mon[75249]: pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:18:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:18:17 compute-0 sudo[239104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:18:17 compute-0 sudo[239104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:17 compute-0 sudo[239104]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:17 compute-0 sudo[239129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:18:17 compute-0 sudo[239129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.559941629 +0000 UTC m=+0.068062747 container create 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:18:17 compute-0 systemd[1]: Started libpod-conmon-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope.
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.530152708 +0000 UTC m=+0.038273876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:17 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.685943004 +0000 UTC m=+0.194064192 container init 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.699267871 +0000 UTC m=+0.207389009 container start 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.703433786 +0000 UTC m=+0.211554924 container attach 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:18:17 compute-0 amazing_dewdney[239183]: 167 167
Jan 10 17:18:17 compute-0 systemd[1]: libpod-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope: Deactivated successfully.
Jan 10 17:18:17 compute-0 conmon[239183]: conmon 1d829c1195896ab16fa6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope/container/memory.events
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.71082237 +0000 UTC m=+0.218943478 container died 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fd1eec63f5cfd852851afdaf2a040cfb73b161bb9e74c04aa7d45e324949761-merged.mount: Deactivated successfully.
Jan 10 17:18:17 compute-0 podman[239166]: 2026-01-10 17:18:17.766241788 +0000 UTC m=+0.274362906 container remove 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:18:17 compute-0 systemd[1]: libpod-conmon-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope: Deactivated successfully.
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.042406192 +0000 UTC m=+0.087179874 container create 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.009569907 +0000 UTC m=+0.054343629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:18 compute-0 systemd[1]: Started libpod-conmon-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope.
Jan 10 17:18:18 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.21277837 +0000 UTC m=+0.257552092 container init 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.233668816 +0000 UTC m=+0.278442498 container start 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.239116426 +0000 UTC m=+0.283890108 container attach 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:18:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:18 compute-0 practical_noether[239223]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:18:18 compute-0 practical_noether[239223]: --> All data devices are unavailable
Jan 10 17:18:18 compute-0 systemd[1]: libpod-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope: Deactivated successfully.
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.882292611 +0000 UTC m=+0.927066283 container died 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd-merged.mount: Deactivated successfully.
Jan 10 17:18:18 compute-0 podman[239206]: 2026-01-10 17:18:18.955324075 +0000 UTC m=+1.000097727 container remove 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 17:18:18 compute-0 systemd[1]: libpod-conmon-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope: Deactivated successfully.
Jan 10 17:18:19 compute-0 sudo[239129]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:19 compute-0 sudo[239258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:18:19 compute-0 sudo[239258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:19 compute-0 sudo[239258]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:19 compute-0 sudo[239283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:18:19 compute-0 sudo[239283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.340536) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499340755, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2171, "num_deletes": 505, "total_data_size": 2157472, "memory_usage": 2201280, "flush_reason": "Manual Compaction"}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499357074, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2097174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12153, "largest_seqno": 14323, "table_properties": {"data_size": 2087901, "index_size": 5323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 21250, "raw_average_key_size": 18, "raw_value_size": 2067248, "raw_average_value_size": 1818, "num_data_blocks": 245, "num_entries": 1137, "num_filter_entries": 1137, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065289, "oldest_key_time": 1768065289, "file_creation_time": 1768065499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16567 microseconds, and 6268 cpu microseconds.
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.357169) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2097174 bytes OK
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.357224) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359154) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359174) EVENT_LOG_v1 {"time_micros": 1768065499359171, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2147338, prev total WAL file size 2147338, number of live WAL files 2.
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.360334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2048KB)], [32(4850KB)]
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499360552, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 7064059, "oldest_snapshot_seqno": -1}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3376 keys, 5626957 bytes, temperature: kUnknown
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499420431, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5626957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5601750, "index_size": 15690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 79870, "raw_average_key_size": 23, "raw_value_size": 5538453, "raw_average_value_size": 1640, "num_data_blocks": 678, "num_entries": 3376, "num_filter_entries": 3376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.420921) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5626957 bytes
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.423345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.6 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 4.7 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(6.1) write-amplify(2.7) OK, records in: 4399, records dropped: 1023 output_compression: NoCompression
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.423398) EVENT_LOG_v1 {"time_micros": 1768065499423362, "job": 14, "event": "compaction_finished", "compaction_time_micros": 60061, "compaction_time_cpu_micros": 32009, "output_level": 6, "num_output_files": 1, "total_output_size": 5626957, "num_input_records": 4399, "num_output_records": 3376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499424237, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499425824, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.360086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.425992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:18:19 compute-0 ceph-mon[75249]: pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:19 compute-0 podman[239318]: 2026-01-10 17:18:19.551457622 +0000 UTC m=+0.065118177 container create 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:18:19 compute-0 systemd[1]: Started libpod-conmon-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope.
Jan 10 17:18:19 compute-0 podman[239318]: 2026-01-10 17:18:19.524734095 +0000 UTC m=+0.038394740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:19 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:19 compute-0 podman[239318]: 2026-01-10 17:18:19.655923212 +0000 UTC m=+0.169583797 container init 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 17:18:19 compute-0 podman[239318]: 2026-01-10 17:18:19.666499594 +0000 UTC m=+0.180160149 container start 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 17:18:19 compute-0 podman[239318]: 2026-01-10 17:18:19.670755711 +0000 UTC m=+0.184416306 container attach 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:18:19 compute-0 bold_lamarr[239335]: 167 167
Jan 10 17:18:19 compute-0 systemd[1]: libpod-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope: Deactivated successfully.
Jan 10 17:18:19 compute-0 podman[239340]: 2026-01-10 17:18:19.745250485 +0000 UTC m=+0.049970049 container died 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-799e33ae352f9998698c8af4e76b0639d3d9c0b9734684fbeb5dd2991244a9f7-merged.mount: Deactivated successfully.
Jan 10 17:18:19 compute-0 podman[239340]: 2026-01-10 17:18:19.786651417 +0000 UTC m=+0.091370891 container remove 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:18:19 compute-0 systemd[1]: libpod-conmon-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope: Deactivated successfully.
Jan 10 17:18:20 compute-0 podman[239362]: 2026-01-10 17:18:20.056382334 +0000 UTC m=+0.075610535 container create 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 17:18:20 compute-0 systemd[1]: Started libpod-conmon-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope.
Jan 10 17:18:20 compute-0 podman[239362]: 2026-01-10 17:18:20.028443154 +0000 UTC m=+0.047671455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:20 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:20 compute-0 podman[239362]: 2026-01-10 17:18:20.16465016 +0000 UTC m=+0.183878461 container init 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:18:20 compute-0 podman[239362]: 2026-01-10 17:18:20.175152359 +0000 UTC m=+0.194380600 container start 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:18:20 compute-0 podman[239362]: 2026-01-10 17:18:20.179105288 +0000 UTC m=+0.198333519 container attach 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:18:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:20 compute-0 busy_shirley[239379]: {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     "0": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "devices": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "/dev/loop3"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             ],
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_name": "ceph_lv0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_size": "21470642176",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "name": "ceph_lv0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "tags": {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_name": "ceph",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.crush_device_class": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.encrypted": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.objectstore": "bluestore",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_id": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.vdo": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.with_tpm": "0"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             },
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "vg_name": "ceph_vg0"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         }
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     ],
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     "1": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "devices": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "/dev/loop4"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             ],
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_name": "ceph_lv1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_size": "21470642176",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "name": "ceph_lv1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "tags": {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_name": "ceph",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.crush_device_class": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.encrypted": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.objectstore": "bluestore",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_id": "1",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.vdo": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.with_tpm": "0"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             },
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "vg_name": "ceph_vg1"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         }
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     ],
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     "2": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "devices": [
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "/dev/loop5"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             ],
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_name": "ceph_lv2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_size": "21470642176",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "name": "ceph_lv2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "tags": {
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.cluster_name": "ceph",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.crush_device_class": "",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.encrypted": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.objectstore": "bluestore",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osd_id": "2",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.vdo": "0",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:                 "ceph.with_tpm": "0"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             },
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "type": "block",
Jan 10 17:18:20 compute-0 busy_shirley[239379]:             "vg_name": "ceph_vg2"
Jan 10 17:18:20 compute-0 busy_shirley[239379]:         }
Jan 10 17:18:20 compute-0 busy_shirley[239379]:     ]
Jan 10 17:18:20 compute-0 busy_shirley[239379]: }
Jan 10 17:18:20 compute-0 systemd[1]: libpod-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope: Deactivated successfully.
Jan 10 17:18:20 compute-0 podman[239388]: 2026-01-10 17:18:20.590195033 +0000 UTC m=+0.029725370 container died 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9-merged.mount: Deactivated successfully.
Jan 10 17:18:20 compute-0 podman[239388]: 2026-01-10 17:18:20.634179416 +0000 UTC m=+0.073709753 container remove 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:18:20 compute-0 systemd[1]: libpod-conmon-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope: Deactivated successfully.
Jan 10 17:18:20 compute-0 sudo[239283]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:20 compute-0 sudo[239403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:18:20 compute-0 sudo[239403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:20 compute-0 sudo[239403]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:20 compute-0 sudo[239428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:18:20 compute-0 sudo[239428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.241997925 +0000 UTC m=+0.072951163 container create 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 17:18:21 compute-0 systemd[1]: Started libpod-conmon-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope.
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.212180022 +0000 UTC m=+0.043133240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:21 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.327513542 +0000 UTC m=+0.158466820 container init 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.337515788 +0000 UTC m=+0.168468966 container start 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.341990642 +0000 UTC m=+0.172943790 container attach 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:18:21 compute-0 systemd[1]: libpod-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope: Deactivated successfully.
Jan 10 17:18:21 compute-0 romantic_taussig[239480]: 167 167
Jan 10 17:18:21 compute-0 conmon[239480]: conmon 5a397329ba6c216482f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope/container/memory.events
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.344291585 +0000 UTC m=+0.175244743 container died 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bd51b188ba02604762d25eb661fad88ae5e887422816707c79e157fed427014-merged.mount: Deactivated successfully.
Jan 10 17:18:21 compute-0 podman[239464]: 2026-01-10 17:18:21.389101541 +0000 UTC m=+0.220054709 container remove 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:18:21 compute-0 systemd[1]: libpod-conmon-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope: Deactivated successfully.
Jan 10 17:18:21 compute-0 ceph-mon[75249]: pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:21 compute-0 podman[239505]: 2026-01-10 17:18:21.59251827 +0000 UTC m=+0.069710784 container create 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 10 17:18:21 compute-0 systemd[1]: Started libpod-conmon-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope.
Jan 10 17:18:21 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:18:21 compute-0 podman[239505]: 2026-01-10 17:18:21.565508245 +0000 UTC m=+0.042700859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:18:21 compute-0 podman[239505]: 2026-01-10 17:18:21.671185999 +0000 UTC m=+0.148378613 container init 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:18:21 compute-0 podman[239505]: 2026-01-10 17:18:21.67993118 +0000 UTC m=+0.157123734 container start 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:18:21 compute-0 podman[239505]: 2026-01-10 17:18:21.684791874 +0000 UTC m=+0.161984418 container attach 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:18:22 compute-0 lvm[239600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:18:22 compute-0 lvm[239599]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:18:22 compute-0 lvm[239600]: VG ceph_vg1 finished
Jan 10 17:18:22 compute-0 lvm[239599]: VG ceph_vg0 finished
Jan 10 17:18:22 compute-0 lvm[239601]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:18:22 compute-0 lvm[239601]: VG ceph_vg2 finished
Jan 10 17:18:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:22 compute-0 happy_cori[239520]: {}
Jan 10 17:18:22 compute-0 systemd[1]: libpod-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Deactivated successfully.
Jan 10 17:18:22 compute-0 podman[239505]: 2026-01-10 17:18:22.570152266 +0000 UTC m=+1.047344780 container died 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:18:22 compute-0 systemd[1]: libpod-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Consumed 1.399s CPU time.
Jan 10 17:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643-merged.mount: Deactivated successfully.
Jan 10 17:18:22 compute-0 podman[239505]: 2026-01-10 17:18:22.746717034 +0000 UTC m=+1.223909558 container remove 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:18:22 compute-0 systemd[1]: libpod-conmon-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Deactivated successfully.
Jan 10 17:18:22 compute-0 sudo[239428]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:18:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:18:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:22 compute-0 sudo[239618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:18:22 compute-0 sudo[239618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:18:22 compute-0 sudo[239618]: pam_unix(sudo:session): session closed for user root
Jan 10 17:18:23 compute-0 ceph-mon[75249]: pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:18:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:25 compute-0 ceph-mon[75249]: pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:27 compute-0 ceph-mon[75249]: pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:18:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:18:29 compute-0 ceph-mon[75249]: pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:31 compute-0 ceph-mon[75249]: pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:32 compute-0 podman[239643]: 2026-01-10 17:18:32.124919347 +0000 UTC m=+0.114349584 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 10 17:18:32 compute-0 podman[239644]: 2026-01-10 17:18:32.167225603 +0000 UTC m=+0.154804599 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:18:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:33 compute-0 ceph-mon[75249]: pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:18:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:18:35 compute-0 ceph-mon[75249]: pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:18:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:18:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:18:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:18:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:18:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:18:37 compute-0 ceph-mon[75249]: pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:18:38
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:38 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:18:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:18:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:39 compute-0 ceph-mon[75249]: pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:41 compute-0 ceph-mon[75249]: pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:18:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:18:43 compute-0 ceph-mon[75249]: pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:18:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:18:45 compute-0 ceph-mon[75249]: pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:47 compute-0 ceph-mon[75249]: pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:48 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 17:18:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:18:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:18:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:18:49 compute-0 nova_compute[237049]: 2026-01-10 17:18:49.336 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:49 compute-0 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:49 compute-0 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:18:49 compute-0 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:18:49 compute-0 nova_compute[237049]: 2026-01-10 17:18:49.451 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:18:49 compute-0 ceph-mon[75249]: pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:50 compute-0 nova_compute[237049]: 2026-01-10 17:18:50.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:50 compute-0 nova_compute[237049]: 2026-01-10 17:18:50.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:50 compute-0 nova_compute[237049]: 2026-01-10 17:18:50.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:50 compute-0 nova_compute[237049]: 2026-01-10 17:18:50.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:50 compute-0 nova_compute[237049]: 2026-01-10 17:18:50.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:18:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.382 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.383 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.383 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.384 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.384 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:18:51 compute-0 ceph-mon[75249]: pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:51 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:18:51 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506908907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:18:51 compute-0 nova_compute[237049]: 2026-01-10 17:18:51.996 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.230 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.232 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5286MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.232 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.233 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.310 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.310 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.329 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:18:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:52 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3506908907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:18:52 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:18:52 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637204984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.904 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.910 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.931 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.934 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:18:52 compute-0 nova_compute[237049]: 2026-01-10 17:18:52.935 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:18:53 compute-0 ceph-mon[75249]: pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:53 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2637204984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:18:53 compute-0 nova_compute[237049]: 2026-01-10 17:18:53.935 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:18:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:55 compute-0 ceph-mon[75249]: pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:57 compute-0 ceph-mon[75249]: pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:18:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:18:59 compute-0 ceph-mon[75249]: pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:01 compute-0 ceph-mon[75249]: pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:03 compute-0 podman[239733]: 2026-01-10 17:19:03.094415741 +0000 UTC m=+0.080895871 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 10 17:19:03 compute-0 podman[239734]: 2026-01-10 17:19:03.160225678 +0000 UTC m=+0.153758795 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 10 17:19:03 compute-0 ceph-mon[75249]: pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:05 compute-0 ceph-mon[75249]: pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:07 compute-0 ceph-mon[75249]: pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:08 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:09 compute-0 ceph-mon[75249]: pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:11 compute-0 ceph-mon[75249]: pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:13 compute-0 ceph-mon[75249]: pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:15 compute-0 ceph-mon[75249]: pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:17 compute-0 ceph-mon[75249]: pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:19 compute-0 ceph-mon[75249]: pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:21 compute-0 ceph-mon[75249]: pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:23 compute-0 sudo[239775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:19:23 compute-0 sudo[239775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:23 compute-0 sudo[239775]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:23 compute-0 sudo[239800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:19:23 compute-0 sudo[239800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:23 compute-0 sudo[239800]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:19:23 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:19:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:19:23 compute-0 sudo[239856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:19:23 compute-0 sudo[239856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:23 compute-0 sudo[239856]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:24 compute-0 sudo[239881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:19:24 compute-0 sudo[239881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.374666668 +0000 UTC m=+0.062127816 container create bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:19:24 compute-0 systemd[1]: Started libpod-conmon-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope.
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.353105786 +0000 UTC m=+0.040566994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.5011088 +0000 UTC m=+0.188569988 container init bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.511603063 +0000 UTC m=+0.199064221 container start bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:19:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.516201111 +0000 UTC m=+0.203662309 container attach bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:19:24 compute-0 naughty_napier[239935]: 167 167
Jan 10 17:19:24 compute-0 systemd[1]: libpod-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope: Deactivated successfully.
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.522305172 +0000 UTC m=+0.209766320 container died bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88f179598aba8e2980910a4320a947d19b5738f9664e388c841d42d3bad2e32-merged.mount: Deactivated successfully.
Jan 10 17:19:24 compute-0 podman[239918]: 2026-01-10 17:19:24.579564401 +0000 UTC m=+0.267025549 container remove bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:19:24 compute-0 systemd[1]: libpod-conmon-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope: Deactivated successfully.
Jan 10 17:19:24 compute-0 podman[239960]: 2026-01-10 17:19:24.801621132 +0000 UTC m=+0.065785638 container create 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 17:19:24 compute-0 systemd[1]: Started libpod-conmon-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope.
Jan 10 17:19:24 compute-0 podman[239960]: 2026-01-10 17:19:24.778410584 +0000 UTC m=+0.042575070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:24 compute-0 podman[239960]: 2026-01-10 17:19:24.927556939 +0000 UTC m=+0.191721445 container init 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 17:19:24 compute-0 podman[239960]: 2026-01-10 17:19:24.942209848 +0000 UTC m=+0.206374354 container start 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:19:24 compute-0 podman[239960]: 2026-01-10 17:19:24.947125996 +0000 UTC m=+0.211290562 container attach 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:19:25 compute-0 quizzical_gagarin[239976]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:19:25 compute-0 quizzical_gagarin[239976]: --> All data devices are unavailable
Jan 10 17:19:25 compute-0 systemd[1]: libpod-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope: Deactivated successfully.
Jan 10 17:19:25 compute-0 podman[239960]: 2026-01-10 17:19:25.604200705 +0000 UTC m=+0.868365271 container died 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2-merged.mount: Deactivated successfully.
Jan 10 17:19:25 compute-0 podman[239960]: 2026-01-10 17:19:25.667919885 +0000 UTC m=+0.932084361 container remove 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:19:25 compute-0 systemd[1]: libpod-conmon-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope: Deactivated successfully.
Jan 10 17:19:25 compute-0 sudo[239881]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:25 compute-0 sudo[240006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:19:25 compute-0 sudo[240006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:25 compute-0 sudo[240006]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:25 compute-0 sudo[240031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:19:25 compute-0 sudo[240031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:25 compute-0 ceph-mon[75249]: pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.194075589 +0000 UTC m=+0.072664331 container create 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:19:26 compute-0 systemd[1]: Started libpod-conmon-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope.
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.165988924 +0000 UTC m=+0.044577716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.272914241 +0000 UTC m=+0.151503003 container init 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.279633178 +0000 UTC m=+0.158221920 container start 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.283449705 +0000 UTC m=+0.162038447 container attach 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:19:26 compute-0 elated_bardeen[240085]: 167 167
Jan 10 17:19:26 compute-0 systemd[1]: libpod-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope: Deactivated successfully.
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.285975725 +0000 UTC m=+0.164564427 container died 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b3d661a01937e737fdac9bebbdfe41422e977c5e424381b0268aaaa764e8163-merged.mount: Deactivated successfully.
Jan 10 17:19:26 compute-0 podman[240068]: 2026-01-10 17:19:26.323957186 +0000 UTC m=+0.202545918 container remove 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:19:26 compute-0 systemd[1]: libpod-conmon-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope: Deactivated successfully.
Jan 10 17:19:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:26 compute-0 podman[240107]: 2026-01-10 17:19:26.534977079 +0000 UTC m=+0.063856004 container create a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:19:26 compute-0 systemd[1]: Started libpod-conmon-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope.
Jan 10 17:19:26 compute-0 podman[240107]: 2026-01-10 17:19:26.508740927 +0000 UTC m=+0.037619922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:26 compute-0 podman[240107]: 2026-01-10 17:19:26.644807276 +0000 UTC m=+0.173686291 container init a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:19:26 compute-0 podman[240107]: 2026-01-10 17:19:26.658989173 +0000 UTC m=+0.187868128 container start a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:19:26 compute-0 podman[240107]: 2026-01-10 17:19:26.663990882 +0000 UTC m=+0.192869887 container attach a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:19:26 compute-0 gracious_robinson[240123]: {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     "0": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "devices": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "/dev/loop3"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             ],
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_name": "ceph_lv0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_size": "21470642176",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "name": "ceph_lv0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "tags": {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_name": "ceph",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.crush_device_class": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.encrypted": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.objectstore": "bluestore",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_id": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.vdo": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.with_tpm": "0"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             },
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "vg_name": "ceph_vg0"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         }
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     ],
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     "1": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "devices": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "/dev/loop4"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             ],
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_name": "ceph_lv1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_size": "21470642176",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "name": "ceph_lv1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "tags": {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_name": "ceph",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.crush_device_class": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.encrypted": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.objectstore": "bluestore",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_id": "1",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.vdo": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.with_tpm": "0"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             },
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "vg_name": "ceph_vg1"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         }
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     ],
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     "2": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "devices": [
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "/dev/loop5"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             ],
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_name": "ceph_lv2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_size": "21470642176",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "name": "ceph_lv2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "tags": {
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.cluster_name": "ceph",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.crush_device_class": "",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.encrypted": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.objectstore": "bluestore",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osd_id": "2",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.vdo": "0",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:                 "ceph.with_tpm": "0"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             },
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "type": "block",
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:             "vg_name": "ceph_vg2"
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:         }
Jan 10 17:19:26 compute-0 gracious_robinson[240123]:     ]
Jan 10 17:19:26 compute-0 gracious_robinson[240123]: }
Jan 10 17:19:27 compute-0 systemd[1]: libpod-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope: Deactivated successfully.
Jan 10 17:19:27 compute-0 podman[240132]: 2026-01-10 17:19:27.080855264 +0000 UTC m=+0.038088845 container died a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:19:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c-merged.mount: Deactivated successfully.
Jan 10 17:19:27 compute-0 podman[240132]: 2026-01-10 17:19:27.135685395 +0000 UTC m=+0.092918876 container remove a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:19:27 compute-0 systemd[1]: libpod-conmon-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope: Deactivated successfully.
Jan 10 17:19:27 compute-0 sudo[240031]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:27 compute-0 sudo[240149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:19:27 compute-0 sudo[240149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:27 compute-0 sudo[240149]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:27 compute-0 sudo[240174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:19:27 compute-0 sudo[240174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.674945686 +0000 UTC m=+0.055007268 container create 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:19:27 compute-0 systemd[1]: Started libpod-conmon-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope.
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.651415988 +0000 UTC m=+0.031477560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:27 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.77248385 +0000 UTC m=+0.152545452 container init 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.782238382 +0000 UTC m=+0.162299954 container start 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.787289693 +0000 UTC m=+0.167351265 container attach 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:19:27 compute-0 systemd[1]: libpod-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope: Deactivated successfully.
Jan 10 17:19:27 compute-0 angry_ride[240227]: 167 167
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.78933375 +0000 UTC m=+0.169395332 container died 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:19:27 compute-0 conmon[240227]: conmon 60d1bb2fd33f115261fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope/container/memory.events
Jan 10 17:19:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a4c2041e86a141d794cb4353c659acea9f4ea5ffd6d896d4627cc1a6d52169-merged.mount: Deactivated successfully.
Jan 10 17:19:27 compute-0 podman[240211]: 2026-01-10 17:19:27.831988582 +0000 UTC m=+0.212050134 container remove 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:19:27 compute-0 systemd[1]: libpod-conmon-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope: Deactivated successfully.
Jan 10 17:19:27 compute-0 ceph-mon[75249]: pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:28 compute-0 podman[240250]: 2026-01-10 17:19:28.075091831 +0000 UTC m=+0.064795221 container create 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 17:19:28 compute-0 systemd[1]: Started libpod-conmon-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope.
Jan 10 17:19:28 compute-0 podman[240250]: 2026-01-10 17:19:28.050847734 +0000 UTC m=+0.040551184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:19:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:19:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:28 compute-0 sshd-session[240143]: Connection closed by authenticating user root 216.36.124.133 port 52664 [preauth]
Jan 10 17:19:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:19:28 compute-0 podman[240250]: 2026-01-10 17:19:28.185435862 +0000 UTC m=+0.175139262 container init 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:19:28 compute-0 podman[240250]: 2026-01-10 17:19:28.196847121 +0000 UTC m=+0.186550521 container start 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:19:28 compute-0 podman[240250]: 2026-01-10 17:19:28.201925273 +0000 UTC m=+0.191628673 container attach 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:19:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:28 compute-0 lvm[240347]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:19:28 compute-0 lvm[240347]: VG ceph_vg1 finished
Jan 10 17:19:28 compute-0 lvm[240345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:19:28 compute-0 lvm[240345]: VG ceph_vg0 finished
Jan 10 17:19:28 compute-0 lvm[240348]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:19:28 compute-0 lvm[240348]: VG ceph_vg2 finished
Jan 10 17:19:29 compute-0 zen_jepsen[240266]: {}
Jan 10 17:19:29 compute-0 systemd[1]: libpod-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Deactivated successfully.
Jan 10 17:19:29 compute-0 systemd[1]: libpod-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Consumed 1.311s CPU time.
Jan 10 17:19:29 compute-0 podman[240250]: 2026-01-10 17:19:29.038427964 +0000 UTC m=+1.028131354 container died 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 17:19:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae-merged.mount: Deactivated successfully.
Jan 10 17:19:29 compute-0 podman[240250]: 2026-01-10 17:19:29.08305243 +0000 UTC m=+1.072755800 container remove 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 17:19:29 compute-0 systemd[1]: libpod-conmon-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Deactivated successfully.
Jan 10 17:19:29 compute-0 sudo[240174]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:19:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:19:29 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:29 compute-0 sudo[240363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:19:29 compute-0 sudo[240363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:19:29 compute-0 sudo[240363]: pam_unix(sudo:session): session closed for user root
Jan 10 17:19:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:29 compute-0 ceph-mon[75249]: pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:29 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:29 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:19:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:31 compute-0 ceph-mon[75249]: pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:33 compute-0 ceph-mon[75249]: pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:34 compute-0 podman[240388]: 2026-01-10 17:19:34.083079386 +0000 UTC m=+0.076578990 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 17:19:34 compute-0 podman[240389]: 2026-01-10 17:19:34.133027831 +0000 UTC m=+0.126292398 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 17:19:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:35 compute-0 ceph-mon[75249]: pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:19:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:19:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:19:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:19:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:19:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:19:37 compute-0 ceph-mon[75249]: pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:19:38
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta']
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:19:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:19:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:19:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 10 17:19:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 10 17:19:39 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 10 17:19:40 compute-0 ceph-mon[75249]: pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Jan 10 17:19:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 10 17:19:41 compute-0 ceph-mon[75249]: osdmap e69: 3 total, 3 up, 3 in
Jan 10 17:19:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 10 17:19:41 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 10 17:19:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 10 17:19:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 10 17:19:42 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 10 17:19:42 compute-0 ceph-mon[75249]: pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Jan 10 17:19:42 compute-0 ceph-mon[75249]: osdmap e70: 3 total, 3 up, 3 in
Jan 10 17:19:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s wr, 0 op/s
Jan 10 17:19:43 compute-0 ceph-mon[75249]: osdmap e71: 3 total, 3 up, 3 in
Jan 10 17:19:44 compute-0 ceph-mon[75249]: pgmap v724: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s wr, 0 op/s
Jan 10 17:19:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 10 17:19:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 10 17:19:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 10 17:19:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s wr, 0 op/s
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00013036095103210262 of space, bias 1.0, pg target 0.039108285309630786 quantized to 32 (current 32)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.310698069856682e-07 of space, bias 4.0, pg target 0.0011172837683828018 quantized to 16 (current 16)
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:19:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:19:45 compute-0 ceph-mon[75249]: osdmap e72: 3 total, 3 up, 3 in
Jan 10 17:19:45 compute-0 ceph-mon[75249]: pgmap v726: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s wr, 0 op/s
Jan 10 17:19:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 6.8 MiB/s wr, 43 op/s
Jan 10 17:19:47 compute-0 ceph-mon[75249]: pgmap v727: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 6.8 MiB/s wr, 43 op/s
Jan 10 17:19:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Jan 10 17:19:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.921 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:19:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.921 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:19:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:19:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 10 17:19:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 10 17:19:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 10 17:19:49 compute-0 ceph-mon[75249]: pgmap v728: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Jan 10 17:19:49 compute-0 ceph-mon[75249]: osdmap e73: 3 total, 3 up, 3 in
Jan 10 17:19:50 compute-0 nova_compute[237049]: 2026-01-10 17:19:50.334 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:50 compute-0 nova_compute[237049]: 2026-01-10 17:19:50.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:50 compute-0 nova_compute[237049]: 2026-01-10 17:19:50.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:19:50 compute-0 nova_compute[237049]: 2026-01-10 17:19:50.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:19:50 compute-0 nova_compute[237049]: 2026-01-10 17:19:50.375 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:19:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 10 17:19:51 compute-0 nova_compute[237049]: 2026-01-10 17:19:51.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:51 compute-0 ceph-mon[75249]: pgmap v730: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.352 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:52 compute-0 nova_compute[237049]: 2026-01-10 17:19:52.352 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:19:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.0 MiB/s wr, 45 op/s
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:19:53 compute-0 ceph-mon[75249]: pgmap v731: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.0 MiB/s wr, 45 op/s
Jan 10 17:19:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:19:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075959427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:19:53 compute-0 nova_compute[237049]: 2026-01-10 17:19:53.970 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.162 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.163 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5278MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.163 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.164 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.245 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.246 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.278 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:19:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 37 op/s
Jan 10 17:19:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4075959427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:19:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:19:54 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280756129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.874 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.880 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.897 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.898 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:19:54 compute-0 nova_compute[237049]: 2026-01-10 17:19:54.898 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:19:55 compute-0 ceph-mon[75249]: pgmap v732: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 37 op/s
Jan 10 17:19:55 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3280756129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:19:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 715 B/s wr, 12 op/s
Jan 10 17:19:57 compute-0 ceph-mon[75249]: pgmap v733: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 715 B/s wr, 12 op/s
Jan 10 17:19:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:19:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:19:59 compute-0 ceph-mon[75249]: pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:01 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:01.729 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 10 17:20:01 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:01.731 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 10 17:20:01 compute-0 ceph-mon[75249]: pgmap v735: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:03 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:03.735 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:20:03 compute-0 ceph-mon[75249]: pgmap v736: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:05 compute-0 podman[240471]: 2026-01-10 17:20:05.076949551 +0000 UTC m=+0.070937076 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 17:20:05 compute-0 podman[240472]: 2026-01-10 17:20:05.11108639 +0000 UTC m=+0.107218486 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:20:05 compute-0 ceph-mon[75249]: pgmap v737: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:07 compute-0 ceph-mon[75249]: pgmap v738: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:09 compute-0 ceph-mon[75249]: pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:11 compute-0 ceph-mon[75249]: pgmap v740: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:13 compute-0 ceph-mon[75249]: pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:15 compute-0 ceph-mon[75249]: pgmap v742: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:17 compute-0 ceph-mon[75249]: pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:19 compute-0 ceph-mon[75249]: pgmap v744: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:21 compute-0 ceph-mon[75249]: pgmap v745: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:23 compute-0 ceph-mon[75249]: pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:25 compute-0 ceph-mon[75249]: pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 10 17:20:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 10 17:20:26 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 10 17:20:27 compute-0 ceph-mon[75249]: pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:20:27 compute-0 ceph-mon[75249]: osdmap e74: 3 total, 3 up, 3 in
Jan 10 17:20:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 10 17:20:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 10 17:20:27 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 10 17:20:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Jan 10 17:20:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 10 17:20:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 10 17:20:28 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 10 17:20:28 compute-0 ceph-mon[75249]: osdmap e75: 3 total, 3 up, 3 in
Jan 10 17:20:29 compute-0 sudo[240517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:20:29 compute-0 sudo[240517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:29 compute-0 sudo[240517]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:29 compute-0 sudo[240542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:20:29 compute-0 sudo[240542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:29 compute-0 ceph-mon[75249]: pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Jan 10 17:20:29 compute-0 ceph-mon[75249]: osdmap e76: 3 total, 3 up, 3 in
Jan 10 17:20:30 compute-0 sudo[240542]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:20:30 compute-0 sudo[240597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:20:30 compute-0 sudo[240597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:30 compute-0 sudo[240597]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:30 compute-0 sudo[240622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:20:30 compute-0 sudo[240622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.653071525 +0000 UTC m=+0.054484522 container create 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:20:30 compute-0 systemd[1]: Started libpod-conmon-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope.
Jan 10 17:20:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.636097894 +0000 UTC m=+0.037510921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.748308423 +0000 UTC m=+0.149721510 container init 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.756648259 +0000 UTC m=+0.158061296 container start 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.760943751 +0000 UTC m=+0.162356858 container attach 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:20:30 compute-0 systemd[1]: libpod-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope: Deactivated successfully.
Jan 10 17:20:30 compute-0 reverent_hopper[240676]: 167 167
Jan 10 17:20:30 compute-0 conmon[240676]: conmon 594228df637c5125985b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope/container/memory.events
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.767981373 +0000 UTC m=+0.169394380 container died 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1221e55b60758a1ce757af36c64b3bbe9dda4c19c776f6849e3c429481d79fdc-merged.mount: Deactivated successfully.
Jan 10 17:20:30 compute-0 podman[240660]: 2026-01-10 17:20:30.821630637 +0000 UTC m=+0.223043664 container remove 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:20:30 compute-0 systemd[1]: libpod-conmon-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope: Deactivated successfully.
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:20:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 10 17:20:30 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.066314746 +0000 UTC m=+0.042729264 container create ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 17:20:31 compute-0 systemd[1]: Started libpod-conmon-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope.
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.049874432 +0000 UTC m=+0.026288960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:31 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.161673458 +0000 UTC m=+0.138088036 container init ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.171645288 +0000 UTC m=+0.148059836 container start ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.175394772 +0000 UTC m=+0.151809370 container attach ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:20:31 compute-0 magical_feynman[240716]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:20:31 compute-0 magical_feynman[240716]: --> All data devices are unavailable
Jan 10 17:20:31 compute-0 systemd[1]: libpod-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope: Deactivated successfully.
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.746729269 +0000 UTC m=+0.723143787 container died ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6-merged.mount: Deactivated successfully.
Jan 10 17:20:31 compute-0 podman[240699]: 2026-01-10 17:20:31.794221439 +0000 UTC m=+0.770635977 container remove ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:20:31 compute-0 systemd[1]: libpod-conmon-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope: Deactivated successfully.
Jan 10 17:20:31 compute-0 sudo[240622]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:31 compute-0 sudo[240748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:20:31 compute-0 sudo[240748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:31 compute-0 sudo[240748]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:31 compute-0 ceph-mon[75249]: pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Jan 10 17:20:31 compute-0 ceph-mon[75249]: osdmap e77: 3 total, 3 up, 3 in
Jan 10 17:20:31 compute-0 sudo[240773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:20:31 compute-0 sudo[240773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 10 17:20:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 10 17:20:32 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.360247421 +0000 UTC m=+0.063032985 container create 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:20:32 compute-0 systemd[1]: Started libpod-conmon-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope.
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.334540371 +0000 UTC m=+0.037325975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:32 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.451075183 +0000 UTC m=+0.153860777 container init 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.460691981 +0000 UTC m=+0.163477585 container start 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.465759249 +0000 UTC m=+0.168544853 container attach 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:20:32 compute-0 reverent_haslett[240826]: 167 167
Jan 10 17:20:32 compute-0 systemd[1]: libpod-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope: Deactivated successfully.
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.467446644 +0000 UTC m=+0.170232258 container died 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 17:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bb5680184a3e82fb62090727af5a5b31649fa705278874e7185280d5f0b431-merged.mount: Deactivated successfully.
Jan 10 17:20:32 compute-0 podman[240809]: 2026-01-10 17:20:32.515475582 +0000 UTC m=+0.218261186 container remove 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:20:32 compute-0 systemd[1]: libpod-conmon-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope: Deactivated successfully.
Jan 10 17:20:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 16 KiB/s wr, 207 op/s
Jan 10 17:20:32 compute-0 podman[240851]: 2026-01-10 17:20:32.744661269 +0000 UTC m=+0.065121314 container create ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:20:32 compute-0 systemd[1]: Started libpod-conmon-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope.
Jan 10 17:20:32 compute-0 podman[240851]: 2026-01-10 17:20:32.718274346 +0000 UTC m=+0.038734421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:32 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:32 compute-0 podman[240851]: 2026-01-10 17:20:32.864577963 +0000 UTC m=+0.185038078 container init ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:20:32 compute-0 podman[240851]: 2026-01-10 17:20:32.875834585 +0000 UTC m=+0.196294630 container start ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:20:32 compute-0 podman[240851]: 2026-01-10 17:20:32.970564277 +0000 UTC m=+0.291024342 container attach ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]: {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     "0": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "devices": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "/dev/loop3"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             ],
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_name": "ceph_lv0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_size": "21470642176",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "name": "ceph_lv0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "tags": {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_name": "ceph",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.crush_device_class": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.encrypted": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.objectstore": "bluestore",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_id": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.vdo": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.with_tpm": "0"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             },
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "vg_name": "ceph_vg0"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         }
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     ],
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     "1": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "devices": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "/dev/loop4"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             ],
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_name": "ceph_lv1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_size": "21470642176",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "name": "ceph_lv1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "tags": {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_name": "ceph",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.crush_device_class": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.encrypted": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.objectstore": "bluestore",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_id": "1",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.vdo": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.with_tpm": "0"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             },
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "vg_name": "ceph_vg1"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         }
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     ],
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     "2": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "devices": [
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "/dev/loop5"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             ],
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_name": "ceph_lv2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_size": "21470642176",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "name": "ceph_lv2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "tags": {
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.cluster_name": "ceph",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.crush_device_class": "",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.encrypted": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.objectstore": "bluestore",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osd_id": "2",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.vdo": "0",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:                 "ceph.with_tpm": "0"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             },
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "type": "block",
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:             "vg_name": "ceph_vg2"
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:         }
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]:     ]
Jan 10 17:20:33 compute-0 mystifying_heyrovsky[240867]: }
Jan 10 17:20:33 compute-0 ceph-mon[75249]: osdmap e78: 3 total, 3 up, 3 in
Jan 10 17:20:33 compute-0 ceph-mon[75249]: pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 16 KiB/s wr, 207 op/s
Jan 10 17:20:33 compute-0 systemd[1]: libpod-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope: Deactivated successfully.
Jan 10 17:20:33 compute-0 podman[240851]: 2026-01-10 17:20:33.224433159 +0000 UTC m=+0.544893204 container died ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 17:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7-merged.mount: Deactivated successfully.
Jan 10 17:20:33 compute-0 podman[240851]: 2026-01-10 17:20:33.280357728 +0000 UTC m=+0.600817753 container remove ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 17:20:33 compute-0 systemd[1]: libpod-conmon-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope: Deactivated successfully.
Jan 10 17:20:33 compute-0 sudo[240773]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:33 compute-0 sudo[240888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:20:33 compute-0 sudo[240888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:33 compute-0 sudo[240888]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:33 compute-0 sudo[240913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:20:33 compute-0 sudo[240913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.830663979 +0000 UTC m=+0.061644659 container create 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:20:33 compute-0 systemd[1]: Started libpod-conmon-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope.
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.797653158 +0000 UTC m=+0.028633898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:33 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.929412494 +0000 UTC m=+0.160393214 container init 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.941217824 +0000 UTC m=+0.172198474 container start 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.94563143 +0000 UTC m=+0.176612150 container attach 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:20:33 compute-0 charming_euler[240966]: 167 167
Jan 10 17:20:33 compute-0 systemd[1]: libpod-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope: Deactivated successfully.
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.948815286 +0000 UTC m=+0.179795956 container died 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 10 17:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fde6b052b243de59a46030432a20b254fa172135cd371c67d443bd86082f79c-merged.mount: Deactivated successfully.
Jan 10 17:20:33 compute-0 podman[240950]: 2026-01-10 17:20:33.994149645 +0000 UTC m=+0.225130295 container remove 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:20:34 compute-0 systemd[1]: libpod-conmon-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope: Deactivated successfully.
Jan 10 17:20:34 compute-0 podman[240990]: 2026-01-10 17:20:34.160639758 +0000 UTC m=+0.041337487 container create 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:20:34 compute-0 systemd[1]: Started libpod-conmon-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope.
Jan 10 17:20:34 compute-0 podman[240990]: 2026-01-10 17:20:34.142688125 +0000 UTC m=+0.023385874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:20:34 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:20:34 compute-0 podman[240990]: 2026-01-10 17:20:34.259172996 +0000 UTC m=+0.139870745 container init 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:20:34 compute-0 podman[240990]: 2026-01-10 17:20:34.271863015 +0000 UTC m=+0.152560754 container start 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:20:34 compute-0 podman[240990]: 2026-01-10 17:20:34.275269138 +0000 UTC m=+0.155966897 container attach 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:20:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 10 17:20:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 10 17:20:34 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 10 17:20:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 10 17:20:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 10 17:20:34 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 10 17:20:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 9.2 KiB/s wr, 127 op/s
Jan 10 17:20:34 compute-0 lvm[241083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:20:34 compute-0 lvm[241085]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:20:34 compute-0 lvm[241085]: VG ceph_vg1 finished
Jan 10 17:20:34 compute-0 lvm[241083]: VG ceph_vg0 finished
Jan 10 17:20:34 compute-0 lvm[241087]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:20:34 compute-0 lvm[241087]: VG ceph_vg2 finished
Jan 10 17:20:35 compute-0 lvm[241088]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:20:35 compute-0 lvm[241088]: VG ceph_vg1 finished
Jan 10 17:20:35 compute-0 wizardly_poincare[241006]: {}
Jan 10 17:20:35 compute-0 systemd[1]: libpod-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Deactivated successfully.
Jan 10 17:20:35 compute-0 systemd[1]: libpod-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Consumed 1.352s CPU time.
Jan 10 17:20:35 compute-0 podman[240990]: 2026-01-10 17:20:35.114192661 +0000 UTC m=+0.994890420 container died 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a-merged.mount: Deactivated successfully.
Jan 10 17:20:35 compute-0 podman[240990]: 2026-01-10 17:20:35.18280483 +0000 UTC m=+1.063502559 container remove 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 17:20:35 compute-0 systemd[1]: libpod-conmon-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Deactivated successfully.
Jan 10 17:20:35 compute-0 sudo[240913]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:20:35 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:20:35 compute-0 podman[241092]: 2026-01-10 17:20:35.256601279 +0000 UTC m=+0.102389326 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:20:35 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:35 compute-0 podman[241100]: 2026-01-10 17:20:35.262279677 +0000 UTC m=+0.109540772 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:20:35 compute-0 ceph-mon[75249]: osdmap e79: 3 total, 3 up, 3 in
Jan 10 17:20:35 compute-0 ceph-mon[75249]: osdmap e80: 3 total, 3 up, 3 in
Jan 10 17:20:35 compute-0 ceph-mon[75249]: pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 9.2 KiB/s wr, 127 op/s
Jan 10 17:20:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:35 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:20:35 compute-0 sudo[241147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:20:35 compute-0 sudo[241147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:20:35 compute-0 sudo[241147]: pam_unix(sudo:session): session closed for user root
Jan 10 17:20:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 10 17:20:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 10 17:20:35 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.390768) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635391030, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1488, "num_deletes": 251, "total_data_size": 1571170, "memory_usage": 1602816, "flush_reason": "Manual Compaction"}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635407882, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1528471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14324, "largest_seqno": 15811, "table_properties": {"data_size": 1521421, "index_size": 4125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14420, "raw_average_key_size": 19, "raw_value_size": 1507279, "raw_average_value_size": 2079, "num_data_blocks": 188, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065499, "oldest_key_time": 1768065499, "file_creation_time": 1768065635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17128 microseconds, and 10542 cpu microseconds.
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.408010) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1528471 bytes OK
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.408052) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409819) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409848) EVENT_LOG_v1 {"time_micros": 1768065635409842, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409876) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1564619, prev total WAL file size 1564619, number of live WAL files 2.
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.410898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1492KB)], [35(5495KB)]
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635411001, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7155428, "oldest_snapshot_seqno": -1}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3583 keys, 5955592 bytes, temperature: kUnknown
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635453019, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5955592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5928449, "index_size": 17113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 84755, "raw_average_key_size": 23, "raw_value_size": 5860824, "raw_average_value_size": 1635, "num_data_blocks": 736, "num_entries": 3583, "num_filter_entries": 3583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.453440) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5955592 bytes
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.454992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 5.4 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 4101, records dropped: 518 output_compression: NoCompression
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.455022) EVENT_LOG_v1 {"time_micros": 1768065635455008, "job": 16, "event": "compaction_finished", "compaction_time_micros": 42179, "compaction_time_cpu_micros": 19546, "output_level": 6, "num_output_files": 1, "total_output_size": 5955592, "num_input_records": 4101, "num_output_records": 3583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635455660, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635457844, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.410638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:35 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:20:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:20:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:20:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:20:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:20:36 compute-0 ceph-mon[75249]: osdmap e81: 3 total, 3 up, 3 in
Jan 10 17:20:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:20:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:20:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 10 17:20:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 10 17:20:36 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 10 17:20:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 11 KiB/s wr, 168 op/s
Jan 10 17:20:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 10 17:20:37 compute-0 ceph-mon[75249]: osdmap e82: 3 total, 3 up, 3 in
Jan 10 17:20:37 compute-0 ceph-mon[75249]: pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 11 KiB/s wr, 168 op/s
Jan 10 17:20:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 10 17:20:37 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:20:38
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:20:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 10 17:20:38 compute-0 ceph-mon[75249]: osdmap e83: 3 total, 3 up, 3 in
Jan 10 17:20:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 10 17:20:38 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 10 17:20:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 18 KiB/s wr, 206 op/s
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:20:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:20:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 10 17:20:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 10 17:20:39 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 10 17:20:39 compute-0 ceph-mon[75249]: osdmap e84: 3 total, 3 up, 3 in
Jan 10 17:20:39 compute-0 ceph-mon[75249]: pgmap v765: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 18 KiB/s wr, 206 op/s
Jan 10 17:20:39 compute-0 ceph-mon[75249]: osdmap e85: 3 total, 3 up, 3 in
Jan 10 17:20:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 65 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.8 MiB/s wr, 52 op/s
Jan 10 17:20:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 10 17:20:41 compute-0 ceph-mon[75249]: pgmap v767: 177 pgs: 177 active+clean; 65 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.8 MiB/s wr, 52 op/s
Jan 10 17:20:41 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 10 17:20:41 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 10 17:20:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 12 MiB/s wr, 189 op/s
Jan 10 17:20:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 10 17:20:42 compute-0 ceph-mon[75249]: osdmap e86: 3 total, 3 up, 3 in
Jan 10 17:20:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 10 17:20:42 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 10 17:20:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 10 17:20:43 compute-0 ceph-mon[75249]: pgmap v769: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 12 MiB/s wr, 189 op/s
Jan 10 17:20:43 compute-0 ceph-mon[75249]: osdmap e87: 3 total, 3 up, 3 in
Jan 10 17:20:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 10 17:20:43 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 10 17:20:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017108615739368682 of space, bias 1.0, pg target 0.5132584721810605 quantized to 32 (current 32)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.068536806985287e-07 of space, bias 4.0, pg target 0.0009682244168382343 quantized to 16 (current 16)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:20:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 12 MiB/s wr, 158 op/s
Jan 10 17:20:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 10 17:20:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 10 17:20:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 10 17:20:44 compute-0 ceph-mon[75249]: osdmap e88: 3 total, 3 up, 3 in
Jan 10 17:20:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 10 17:20:45 compute-0 ceph-mon[75249]: pgmap v772: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 12 MiB/s wr, 158 op/s
Jan 10 17:20:45 compute-0 ceph-mon[75249]: osdmap e89: 3 total, 3 up, 3 in
Jan 10 17:20:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 10 17:20:45 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 10 17:20:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 73 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 12 MiB/s wr, 227 op/s
Jan 10 17:20:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 10 17:20:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 10 17:20:46 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 10 17:20:46 compute-0 ceph-mon[75249]: osdmap e90: 3 total, 3 up, 3 in
Jan 10 17:20:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 10 17:20:47 compute-0 ceph-mon[75249]: pgmap v775: 177 pgs: 177 active+clean; 73 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 12 MiB/s wr, 227 op/s
Jan 10 17:20:47 compute-0 ceph-mon[75249]: osdmap e91: 3 total, 3 up, 3 in
Jan 10 17:20:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 10 17:20:47 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.376 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.378 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.378 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 10 17:20:48 compute-0 nova_compute[237049]: 2026-01-10 17:20:48.396 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 12 MiB/s wr, 357 op/s
Jan 10 17:20:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 10 17:20:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 10 17:20:48 compute-0 ceph-mon[75249]: osdmap e92: 3 total, 3 up, 3 in
Jan 10 17:20:48 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 10 17:20:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:20:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:20:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:20:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 10 17:20:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 10 17:20:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 10 17:20:49 compute-0 ceph-mon[75249]: pgmap v778: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 12 MiB/s wr, 357 op/s
Jan 10 17:20:49 compute-0 ceph-mon[75249]: osdmap e93: 3 total, 3 up, 3 in
Jan 10 17:20:49 compute-0 ceph-mon[75249]: osdmap e94: 3 total, 3 up, 3 in
Jan 10 17:20:50 compute-0 nova_compute[237049]: 2026-01-10 17:20:50.415 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:50 compute-0 nova_compute[237049]: 2026-01-10 17:20:50.416 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:20:50 compute-0 nova_compute[237049]: 2026-01-10 17:20:50.416 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:20:50 compute-0 nova_compute[237049]: 2026-01-10 17:20:50.438 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:20:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 13 KiB/s wr, 192 op/s
Jan 10 17:20:51 compute-0 ceph-mon[75249]: pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 13 KiB/s wr, 192 op/s
Jan 10 17:20:52 compute-0 nova_compute[237049]: 2026-01-10 17:20:52.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:52 compute-0 nova_compute[237049]: 2026-01-10 17:20:52.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:52 compute-0 nova_compute[237049]: 2026-01-10 17:20:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 15 KiB/s wr, 234 op/s
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.374 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.375 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.375 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.376 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.376 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:20:53 compute-0 ceph-mon[75249]: pgmap v782: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 15 KiB/s wr, 234 op/s
Jan 10 17:20:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:20:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534176182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:20:53 compute-0 nova_compute[237049]: 2026-01-10 17:20:53.906 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.137 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.140 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5251MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.140 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.141 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:20:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 10 17:20:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 10 17:20:54 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.428 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.429 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.535 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing inventories for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 10 17:20:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.3 KiB/s wr, 142 op/s
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.598 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating ProviderTree inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.599 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.619 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing aggregate associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.641 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing trait associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 10 17:20:54 compute-0 nova_compute[237049]: 2026-01-10 17:20:54.658 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:20:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3534176182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:20:54 compute-0 ceph-mon[75249]: osdmap e95: 3 total, 3 up, 3 in
Jan 10 17:20:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:20:55 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872049099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:20:55 compute-0 nova_compute[237049]: 2026-01-10 17:20:55.275 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:20:55 compute-0 nova_compute[237049]: 2026-01-10 17:20:55.282 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:20:55 compute-0 nova_compute[237049]: 2026-01-10 17:20:55.301 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:20:55 compute-0 nova_compute[237049]: 2026-01-10 17:20:55.302 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:20:55 compute-0 nova_compute[237049]: 2026-01-10 17:20:55.302 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:20:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 10 17:20:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 10 17:20:55 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 10 17:20:55 compute-0 ceph-mon[75249]: pgmap v784: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.3 KiB/s wr, 142 op/s
Jan 10 17:20:55 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2872049099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:20:55 compute-0 ceph-mon[75249]: osdmap e96: 3 total, 3 up, 3 in
Jan 10 17:20:56 compute-0 nova_compute[237049]: 2026-01-10 17:20:56.302 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:56 compute-0 nova_compute[237049]: 2026-01-10 17:20:56.303 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:20:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 7.7 KiB/s wr, 139 op/s
Jan 10 17:20:57 compute-0 ceph-mon[75249]: pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 7.7 KiB/s wr, 139 op/s
Jan 10 17:20:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.0 KiB/s wr, 104 op/s
Jan 10 17:20:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:20:59 compute-0 ceph-mon[75249]: pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.0 KiB/s wr, 104 op/s
Jan 10 17:20:59 compute-0 nova_compute[237049]: 2026-01-10 17:20:59.897 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:20:59 compute-0 nova_compute[237049]: 2026-01-10 17:20:59.898 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:20:59 compute-0 nova_compute[237049]: 2026-01-10 17:20:59.960 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.105 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.106 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.113 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.114 237053 INFO nova.compute.claims [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Claim successful on node compute-0.ctlplane.example.com
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.385 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 10 17:21:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335636917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.926 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.933 237053 DEBUG nova.compute.provider_tree [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.950 237053 DEBUG nova.scheduler.client.report [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.977 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:00 compute-0 nova_compute[237049]: 2026-01-10 17:21:00.978 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.025 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.026 237053 DEBUG nova.network.neutron [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.058 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.081 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.120 237053 INFO nova.virt.block_device [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Booting with volume 77e9b8e1-774e-41cc-88ba-d21e1643cb3e at /dev/vda
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.681 237053 DEBUG os_brick.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 10 17:21:01 compute-0 nova_compute[237049]: 2026-01-10 17:21:01.683 237053 INFO oslo.privsep.daemon [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpwdcyow_v/privsep.sock']
Jan 10 17:21:01 compute-0 ceph-mon[75249]: pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 10 17:21:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3335636917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.401 237053 INFO oslo.privsep.daemon [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Spawned new privsep daemon via rootwrap
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.252 241246 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.257 241246 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.259 241246 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.259 241246 INFO oslo.privsep.daemon [-] privsep daemon running as pid 241246
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.408 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[63eb29c4-8493-4999-a06c-c1e51d930df1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.519 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.538 241246 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.539 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[26762a29-5109-4905-ab2e-6c6be2e08ff6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.541 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.548 237053 DEBUG nova.network.neutron [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.549 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.557 241246 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.557 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[11d26d22-90a2-42c4-826b-48443f4e0bd3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a9da3fcdfda', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.562 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.580 241246 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.581 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[e54c8356-1b07-4a90-81cf-6ba86471bb0c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.584 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[030ab6c7-1178-4c49-8a13-87d9629ce676]: (4, 'a9d7d544-72dd-4b08-9e5e-495057bde287') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.585 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.601 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.604 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.605 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.605 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:3f2d999e-37e2-4333-aca5-637ccade160f dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.606 237053 DEBUG os_brick.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] <== get_connector_properties: return (923ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a9da3fcdfda', 'do_local_attach': False, 'nvme_hostid': '3f2d999e-37e2-4333-aca5-637ccade160f', 'system uuid': 'a9d7d544-72dd-4b08-9e5e-495057bde287', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:3f2d999e-37e2-4333-aca5-637ccade160f', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 10 17:21:02 compute-0 nova_compute[237049]: 2026-01-10 17:21:02.606 237053 DEBUG nova.virt.block_device [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating existing volume attachment record: f34e40bd-e482-449a-95f8-cbab65899fc7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 10 17:21:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 17:21:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874764883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:03 compute-0 ceph-mon[75249]: pgmap v789: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Jan 10 17:21:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1874764883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.293 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.296 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.297 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating image(s)
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.298 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.299 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Ensure instance console log exists: /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.299 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.300 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.300 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.305 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'attachment_id': 'f34e40bd-e482-449a-95f8-cbab65899fc7', 'mount_device': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6290fedf-9ecb-464c-8d5e-b6af64859702', 'attached_at': '', 'detached_at': '', 'volume_id': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'serial': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e'}, 'delete_on_termination': True, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.312 237053 WARNING nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.319 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.320 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.325 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.326 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.327 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.327 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-10T17:19:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='83b4ecee-2b50-47ec-82ec-7f3e1d1624ce',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.330 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.330 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.331 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.331 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.365 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.369 237053 DEBUG nova.privsep.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.370 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 10 17:21:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 10 17:21:04 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 10 17:21:04 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.478 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 10 17:21:04 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.481 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 10 17:21:04 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.483 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:21:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 10 17:21:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 17:21:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849157757' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.956 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.958 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.959 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:04 compute-0 nova_compute[237049]: 2026-01-10 17:21:04.961 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:04 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 10 17:21:05 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.061 237053 DEBUG nova.objects.instance [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6290fedf-9ecb-464c-8d5e-b6af64859702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.083 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] End _get_guest_xml xml=<domain type="kvm">
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <uuid>6290fedf-9ecb-464c-8d5e-b6af64859702</uuid>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <name>instance-00000001</name>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <memory>131072</memory>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <vcpu>1</vcpu>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <metadata>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:name>instance-depend-image</nova:name>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:creationTime>2026-01-10 17:21:04</nova:creationTime>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:flavor name="m1.nano">
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:memory>128</nova:memory>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:disk>1</nova:disk>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:swap>0</nova:swap>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:ephemeral>0</nova:ephemeral>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:vcpus>1</nova:vcpus>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </nova:flavor>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:owner>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:user uuid="75fbaed513e94e80acbf58803e0a4b03">tempest-ImageDependencyTests-1967781085-project-member</nova:user>
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <nova:project uuid="0299cbaa071f4ac4b1435e4144bd4d79">tempest-ImageDependencyTests-1967781085</nova:project>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </nova:owner>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <nova:ports/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </nova:instance>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </metadata>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <sysinfo type="smbios">
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <system>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="manufacturer">RDO</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="product">OpenStack Compute</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="serial">6290fedf-9ecb-464c-8d5e-b6af64859702</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="uuid">6290fedf-9ecb-464c-8d5e-b6af64859702</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <entry name="family">Virtual Machine</entry>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </system>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </sysinfo>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <os>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <boot dev="hd"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <smbios mode="sysinfo"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </os>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <features>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <acpi/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <apic/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <vmcoreinfo/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </features>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <clock offset="utc">
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <timer name="pit" tickpolicy="delay"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <timer name="hpet" present="no"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </clock>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <cpu mode="host-model" match="exact">
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <topology sockets="1" cores="1" threads="1"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <disk type="network" device="cdrom">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <driver type="raw" cache="none"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <source protocol="rbd" name="vms/6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config">
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <host name="192.168.122.100" port="6789"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </source>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <auth username="openstack">
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </auth>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <target dev="sda" bus="sata"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <disk type="network" device="disk">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <source protocol="rbd" name="volumes/volume-77e9b8e1-774e-41cc-88ba-d21e1643cb3e">
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <host name="192.168.122.100" port="6789"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </source>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <auth username="openstack">
Jan 10 17:21:05 compute-0 nova_compute[237049]:         <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       </auth>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <target dev="vda" bus="virtio"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <serial>77e9b8e1-774e-41cc-88ba-d21e1643cb3e</serial>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <serial type="pty">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <log file="/var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/console.log" append="off"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </serial>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <video>
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <model type="virtio"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </video>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <input type="tablet" bus="usb"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <rng model="virtio">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <backend model="random">/dev/urandom</backend>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <controller type="usb" index="0"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     <memballoon model="virtio">
Jan 10 17:21:05 compute-0 nova_compute[237049]:       <stats period="10"/>
Jan 10 17:21:05 compute-0 nova_compute[237049]:     </memballoon>
Jan 10 17:21:05 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:21:05 compute-0 nova_compute[237049]: </domain>
Jan 10 17:21:05 compute-0 nova_compute[237049]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.159 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.159 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.160 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Using config drive
Jan 10 17:21:05 compute-0 nova_compute[237049]: 2026-01-10 17:21:05.192 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:05 compute-0 ceph-mon[75249]: osdmap e97: 3 total, 3 up, 3 in
Jan 10 17:21:05 compute-0 ceph-mon[75249]: pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 10 17:21:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3849157757' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:06 compute-0 podman[241333]: 2026-01-10 17:21:06.092633553 +0000 UTC m=+0.088764386 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:21:06 compute-0 podman[241332]: 2026-01-10 17:21:06.093914599 +0000 UTC m=+0.086061732 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Jan 10 17:21:06 compute-0 nova_compute[237049]: 2026-01-10 17:21:06.345 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating config drive at /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config
Jan 10 17:21:06 compute-0 nova_compute[237049]: 2026-01-10 17:21:06.355 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gneuq0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:06 compute-0 nova_compute[237049]: 2026-01-10 17:21:06.495 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gneuq0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:06 compute-0 nova_compute[237049]: 2026-01-10 17:21:06.531 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:06 compute-0 nova_compute[237049]: 2026-01-10 17:21:06.536 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1023 B/s wr, 9 op/s
Jan 10 17:21:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 10 17:21:07 compute-0 ceph-mon[75249]: pgmap v792: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1023 B/s wr, 9 op/s
Jan 10 17:21:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 10 17:21:07 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 10 17:21:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:21:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 10 17:21:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 10 17:21:08 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 10 17:21:08 compute-0 ceph-mon[75249]: osdmap e98: 3 total, 3 up, 3 in
Jan 10 17:21:08 compute-0 nova_compute[237049]: 2026-01-10 17:21:08.834 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:08 compute-0 nova_compute[237049]: 2026-01-10 17:21:08.835 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting local config drive /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config because it was imported into RBD.
Jan 10 17:21:08 compute-0 systemd-machined[205102]: New machine qemu-1-instance-00000001.
Jan 10 17:21:08 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:09 compute-0 ceph-mon[75249]: pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:21:09 compute-0 ceph-mon[75249]: osdmap e99: 3 total, 3 up, 3 in
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.862 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065669.8620079, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.864 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Resumed (Lifecycle Event)
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.868 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.869 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.873 237053 INFO nova.virt.libvirt.driver [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance spawned successfully.
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.874 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.923 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.928 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.952 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.956 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.958 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.959 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.960 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.961 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.969 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.970 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065669.864454, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:09 compute-0 nova_compute[237049]: 2026-01-10 17:21:09.971 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Started (Lifecycle Event)
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.066 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.071 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.080 237053 INFO nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 5.79 seconds to spawn the instance on the hypervisor.
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.082 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.102 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.164 237053 INFO nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 10.10 seconds to build instance.
Jan 10 17:21:10 compute-0 nova_compute[237049]: 2026-01-10 17:21:10.187 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 827 B/s rd, 23 KiB/s wr, 0 op/s
Jan 10 17:21:11 compute-0 ceph-mon[75249]: pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 827 B/s rd, 23 KiB/s wr, 0 op/s
Jan 10 17:21:11 compute-0 sshd-session[241474]: Connection closed by authenticating user root 216.36.124.133 port 53530 [preauth]
Jan 10 17:21:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 19 KiB/s wr, 19 op/s
Jan 10 17:21:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 10 17:21:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 10 17:21:13 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 10 17:21:13 compute-0 ceph-mon[75249]: pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 19 KiB/s wr, 19 op/s
Jan 10 17:21:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 10 17:21:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 10 17:21:14 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 10 17:21:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 25 op/s
Jan 10 17:21:14 compute-0 ceph-mon[75249]: osdmap e100: 3 total, 3 up, 3 in
Jan 10 17:21:14 compute-0 ceph-mon[75249]: osdmap e101: 3 total, 3 up, 3 in
Jan 10 17:21:15 compute-0 ceph-mon[75249]: pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 25 op/s
Jan 10 17:21:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 10 17:21:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 10 17:21:16 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 10 17:21:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Jan 10 17:21:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 10 17:21:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 10 17:21:17 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 10 17:21:17 compute-0 ceph-mon[75249]: osdmap e102: 3 total, 3 up, 3 in
Jan 10 17:21:17 compute-0 ceph-mon[75249]: pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Jan 10 17:21:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 10 17:21:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 10 17:21:18 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 10 17:21:18 compute-0 ceph-mon[75249]: osdmap e103: 3 total, 3 up, 3 in
Jan 10 17:21:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.5 KiB/s wr, 145 op/s
Jan 10 17:21:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:19 compute-0 ceph-mon[75249]: osdmap e104: 3 total, 3 up, 3 in
Jan 10 17:21:19 compute-0 ceph-mon[75249]: pgmap v805: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.5 KiB/s wr, 145 op/s
Jan 10 17:21:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.0 KiB/s wr, 129 op/s
Jan 10 17:21:21 compute-0 ceph-mon[75249]: pgmap v806: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.0 KiB/s wr, 129 op/s
Jan 10 17:21:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.8 KiB/s wr, 92 op/s
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.029 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.029 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.051 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.161 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.161 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.169 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.169 237053 INFO nova.compute.claims [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Claim successful on node compute-0.ctlplane.example.com
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.319 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:23 compute-0 ceph-mon[75249]: pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.8 KiB/s wr, 92 op/s
Jan 10 17:21:23 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:23 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003498910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.872 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.881 237053 DEBUG nova.compute.provider_tree [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.905 237053 DEBUG nova.scheduler.client.report [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.940 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.942 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.993 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 10 17:21:23 compute-0 nova_compute[237049]: 2026-01-10 17:21:23.994 237053 DEBUG nova.network.neutron [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.018 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.038 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.117 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.118 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.119 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating image(s)
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.150 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.186 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.221 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.226 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "1a00580aebdcff88afc7729ad1595e2017e01a34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.227 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "1a00580aebdcff88afc7729ad1595e2017e01a34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 10 17:21:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 10 17:21:24 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.518 237053 DEBUG nova.virt.libvirt.imagebackend [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image locations are: [{'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 10 17:21:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 5.0 KiB/s wr, 80 op/s
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.591 237053 DEBUG nova.network.neutron [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.592 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.596 237053 DEBUG nova.virt.libvirt.imagebackend [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Selected location: {'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.597 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] cloning images/debf2853-94a6-4539-86f8-a9fe443a47cc@snap to None/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 10 17:21:24 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2003498910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:24 compute-0 ceph-mon[75249]: osdmap e105: 3 total, 3 up, 3 in
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.722 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "1a00580aebdcff88afc7729ad1595e2017e01a34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.900 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] resizing rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 10 17:21:24 compute-0 nova_compute[237049]: 2026-01-10 17:21:24.998 237053 DEBUG nova.objects.instance [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'migration_context' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.037 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.038 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Ensure instance console log exists: /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.038 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.039 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.040 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.042 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='a55840e5637d8193bf5f45ed86d227d1',container_format='bare',created_at=2026-01-10T17:21:17Z,direct_url=<?>,disk_format='raw',id=debf2853-94a6-4539-86f8-a9fe443a47cc,min_disk=0,min_ram=0,name='tempest-image-dependency-test-105931288',owner='0299cbaa071f4ac4b1435e4144bd4d79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-10T17:21:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'debf2853-94a6-4539-86f8-a9fe443a47cc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.049 237053 WARNING nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.056 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.057 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.061 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.062 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.063 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.063 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-10T17:19:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='83b4ecee-2b50-47ec-82ec-7f3e1d1624ce',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='a55840e5637d8193bf5f45ed86d227d1',container_format='bare',created_at=2026-01-10T17:21:17Z,direct_url=<?>,disk_format='raw',id=debf2853-94a6-4539-86f8-a9fe443a47cc,min_disk=0,min_ram=0,name='tempest-image-dependency-test-105931288',owner='0299cbaa071f4ac4b1435e4144bd4d79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-10T17:21:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.064 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.064 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.065 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.065 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.066 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.066 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.067 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.067 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.068 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.068 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.073 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:25 compute-0 ceph-mon[75249]: pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 5.0 KiB/s wr, 80 op/s
Jan 10 17:21:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 17:21:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211419263' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.689 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.723 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:25 compute-0 nova_compute[237049]: 2026-01-10 17:21:25.729 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 17:21:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967516454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.307 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.311 237053 DEBUG nova.objects.instance [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.332 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] End _get_guest_xml xml=<domain type="kvm">
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <uuid>114a4603-17a5-4e6b-b2d6-c77ef324a07d</uuid>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <name>instance-00000002</name>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <memory>131072</memory>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <vcpu>1</vcpu>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <metadata>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:name>instance-depend-image</nova:name>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:creationTime>2026-01-10 17:21:25</nova:creationTime>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:flavor name="m1.nano">
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:memory>128</nova:memory>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:disk>1</nova:disk>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:swap>0</nova:swap>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:ephemeral>0</nova:ephemeral>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:vcpus>1</nova:vcpus>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </nova:flavor>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:owner>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:user uuid="75fbaed513e94e80acbf58803e0a4b03">tempest-ImageDependencyTests-1967781085-project-member</nova:user>
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <nova:project uuid="0299cbaa071f4ac4b1435e4144bd4d79">tempest-ImageDependencyTests-1967781085</nova:project>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </nova:owner>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:root type="image" uuid="debf2853-94a6-4539-86f8-a9fe443a47cc"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <nova:ports/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </nova:instance>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </metadata>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <sysinfo type="smbios">
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <system>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="manufacturer">RDO</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="product">OpenStack Compute</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="serial">114a4603-17a5-4e6b-b2d6-c77ef324a07d</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="uuid">114a4603-17a5-4e6b-b2d6-c77ef324a07d</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <entry name="family">Virtual Machine</entry>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </system>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </sysinfo>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <os>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <boot dev="hd"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <smbios mode="sysinfo"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </os>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <features>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <acpi/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <apic/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <vmcoreinfo/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </features>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <clock offset="utc">
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <timer name="pit" tickpolicy="delay"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <timer name="hpet" present="no"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </clock>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <cpu mode="host-model" match="exact">
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <topology sockets="1" cores="1" threads="1"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </cpu>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   <devices>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <disk type="network" device="disk">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <driver type="raw" cache="none"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <source protocol="rbd" name="vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk">
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <host name="192.168.122.100" port="6789"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </source>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <auth username="openstack">
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </auth>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <target dev="vda" bus="virtio"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <disk type="network" device="cdrom">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <driver type="raw" cache="none"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <source protocol="rbd" name="vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config">
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <host name="192.168.122.100" port="6789"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </source>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <auth username="openstack">
Jan 10 17:21:26 compute-0 nova_compute[237049]:         <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       </auth>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <target dev="sda" bus="sata"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </disk>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <serial type="pty">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <log file="/var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/console.log" append="off"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </serial>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <video>
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <model type="virtio"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </video>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <input type="tablet" bus="usb"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <rng model="virtio">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <backend model="random">/dev/urandom</backend>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </rng>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="pci" model="pcie-root-port"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <controller type="usb" index="0"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     <memballoon model="virtio">
Jan 10 17:21:26 compute-0 nova_compute[237049]:       <stats period="10"/>
Jan 10 17:21:26 compute-0 nova_compute[237049]:     </memballoon>
Jan 10 17:21:26 compute-0 nova_compute[237049]:   </devices>
Jan 10 17:21:26 compute-0 nova_compute[237049]: </domain>
Jan 10 17:21:26 compute-0 nova_compute[237049]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Using config drive
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.418 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.578 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating config drive at /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config
Jan 10 17:21:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.5 KiB/s wr, 98 op/s
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.588 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8bbnkfj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:26 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1211419263' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:26 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1967516454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.729 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8bbnkfj" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.774 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.780 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.960 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:26 compute-0 nova_compute[237049]: 2026-01-10 17:21:26.962 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deleting local config drive /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config because it was imported into RBD.
Jan 10 17:21:27 compute-0 systemd-machined[205102]: New machine qemu-2-instance-00000002.
Jan 10 17:21:27 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 10 17:21:27 compute-0 ceph-mon[75249]: pgmap v810: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.5 KiB/s wr, 98 op/s
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.784 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065687.7838073, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.786 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Resumed (Lifecycle Event)
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.790 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.791 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.798 237053 INFO nova.virt.libvirt.driver [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance spawned successfully.
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.798 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.837 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.846 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.852 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.853 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.854 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.854 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.855 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.856 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.907 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.907 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065687.7853625, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.908 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Started (Lifecycle Event)
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.943 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.948 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.957 237053 INFO nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 3.84 seconds to spawn the instance on the hypervisor.
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.958 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:27 compute-0 nova_compute[237049]: 2026-01-10 17:21:27.973 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 10 17:21:28 compute-0 nova_compute[237049]: 2026-01-10 17:21:28.015 237053 INFO nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 4.89 seconds to build instance.
Jan 10 17:21:28 compute-0 nova_compute[237049]: 2026-01-10 17:21:28.036 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 18 KiB/s wr, 83 op/s
Jan 10 17:21:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:29 compute-0 ceph-mon[75249]: pgmap v811: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 18 KiB/s wr, 83 op/s
Jan 10 17:21:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 17 KiB/s wr, 72 op/s
Jan 10 17:21:30 compute-0 nova_compute[237049]: 2026-01-10 17:21:30.937 237053 DEBUG nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:30 compute-0 nova_compute[237049]: 2026-01-10 17:21:30.999 237053 INFO nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] instance snapshotting
Jan 10 17:21:31 compute-0 nova_compute[237049]: 2026-01-10 17:21:31.279 237053 INFO nova.virt.libvirt.driver [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Beginning live snapshot process
Jan 10 17:21:31 compute-0 nova_compute[237049]: 2026-01-10 17:21:31.473 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] creating snapshot(0e0fcb56a68946158c679c3e8fa00004) on rbd image(114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 10 17:21:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 10 17:21:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 10 17:21:31 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 10 17:21:31 compute-0 ceph-mon[75249]: pgmap v812: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 17 KiB/s wr, 72 op/s
Jan 10 17:21:31 compute-0 nova_compute[237049]: 2026-01-10 17:21:31.762 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] cloning vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk@0e0fcb56a68946158c679c3e8fa00004 to images/54dadfaa-d0a5-471e-9ebf-65a4699f0e55 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 10 17:21:31 compute-0 nova_compute[237049]: 2026-01-10 17:21:31.899 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] flattening images/54dadfaa-d0a5-471e-9ebf-65a4699f0e55 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 10 17:21:32 compute-0 nova_compute[237049]: 2026-01-10 17:21:32.355 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] removing snapshot(0e0fcb56a68946158c679c3e8fa00004) on rbd image(114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 10 17:21:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 69 op/s
Jan 10 17:21:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 10 17:21:32 compute-0 ceph-mon[75249]: osdmap e106: 3 total, 3 up, 3 in
Jan 10 17:21:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 10 17:21:32 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 10 17:21:32 compute-0 nova_compute[237049]: 2026-01-10 17:21:32.802 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] creating snapshot(snap) on rbd image(54dadfaa-d0a5-471e-9ebf-65a4699f0e55) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 10 17:21:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 10 17:21:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 10 17:21:33 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 10 17:21:33 compute-0 ceph-mon[75249]: pgmap v814: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 69 op/s
Jan 10 17:21:33 compute-0 ceph-mon[75249]: osdmap e107: 3 total, 3 up, 3 in
Jan 10 17:21:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 682 B/s wr, 20 op/s
Jan 10 17:21:34 compute-0 ceph-mon[75249]: osdmap e108: 3 total, 3 up, 3 in
Jan 10 17:21:35 compute-0 nova_compute[237049]: 2026-01-10 17:21:35.240 237053 INFO nova.virt.libvirt.driver [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Snapshot image upload complete
Jan 10 17:21:35 compute-0 nova_compute[237049]: 2026-01-10 17:21:35.241 237053 INFO nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 4.24 seconds to snapshot the instance on the hypervisor.
Jan 10 17:21:35 compute-0 sudo[242013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:21:35 compute-0 sudo[242013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:35 compute-0 sudo[242013]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:35 compute-0 sudo[242038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 17:21:35 compute-0 sudo[242038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:35 compute-0 ceph-mon[75249]: pgmap v817: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 682 B/s wr, 20 op/s
Jan 10 17:21:36 compute-0 podman[242107]: 2026-01-10 17:21:36.016345628 +0000 UTC m=+0.084091011 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:21:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:21:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:21:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:21:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:21:36 compute-0 podman[242107]: 2026-01-10 17:21:36.121897531 +0000 UTC m=+0.189642874 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:21:36 compute-0 podman[242147]: 2026-01-10 17:21:36.368498534 +0000 UTC m=+0.147972906 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:21:36 compute-0 podman[242151]: 2026-01-10 17:21:36.397726336 +0000 UTC m=+0.176310611 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 10 17:21:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.2 KiB/s wr, 142 op/s
Jan 10 17:21:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 10 17:21:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:21:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:21:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 10 17:21:36 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 10 17:21:37 compute-0 sudo[242038]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:37 compute-0 sudo[242315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:21:37 compute-0 sudo[242315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:37 compute-0 sudo[242315]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:37 compute-0 sudo[242340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:21:37 compute-0 sudo[242340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.436 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.437 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.438 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.438 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.439 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.441 237053 INFO nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Terminating instance
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.443 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.443 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquired lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.444 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 10 17:21:37 compute-0 ceph-mon[75249]: pgmap v818: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.2 KiB/s wr, 142 op/s
Jan 10 17:21:37 compute-0 ceph-mon[75249]: osdmap e109: 3 total, 3 up, 3 in
Jan 10 17:21:37 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:37 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:37 compute-0 sudo[242340]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:21:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:21:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:21:37 compute-0 nova_compute[237049]: 2026-01-10 17:21:37.942 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 17:21:37 compute-0 sudo[242396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:21:37 compute-0 sudo[242396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:37 compute-0 sudo[242396]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:38 compute-0 sudo[242421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:21:38 compute-0 sudo[242421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:21:38
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta']
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:21:38 compute-0 nova_compute[237049]: 2026-01-10 17:21:38.221 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 17:21:38 compute-0 nova_compute[237049]: 2026-01-10 17:21:38.235 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Releasing lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:21:38 compute-0 nova_compute[237049]: 2026-01-10 17:21:38.236 237053 DEBUG nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 10 17:21:38 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 10 17:21:38 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.179s CPU time.
Jan 10 17:21:38 compute-0 systemd-machined[205102]: Machine qemu-2-instance-00000002 terminated.
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.393538017 +0000 UTC m=+0.056189208 container create 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:21:38 compute-0 systemd[1]: Started libpod-conmon-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope.
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.365116659 +0000 UTC m=+0.027767910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:38 compute-0 nova_compute[237049]: 2026-01-10 17:21:38.464 237053 INFO nova.virt.libvirt.driver [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance destroyed successfully.
Jan 10 17:21:38 compute-0 nova_compute[237049]: 2026-01-10 17:21:38.465 237053 DEBUG nova.objects.instance [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'resources' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 17:21:38 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.504116508 +0000 UTC m=+0.166767779 container init 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.515492254 +0000 UTC m=+0.178143445 container start 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.52011867 +0000 UTC m=+0.182769881 container attach 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:21:38 compute-0 elegant_chebyshev[242475]: 167 167
Jan 10 17:21:38 compute-0 systemd[1]: libpod-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope: Deactivated successfully.
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.530809745 +0000 UTC m=+0.193460916 container died 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac9627a80ce9e05d4199ade9d2d8212fde8054f2f760ac26968d358cc5b8e9b-merged.mount: Deactivated successfully.
Jan 10 17:21:38 compute-0 podman[242458]: 2026-01-10 17:21:38.576623656 +0000 UTC m=+0.239274867 container remove 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 17:21:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.8 KiB/s wr, 132 op/s
Jan 10 17:21:38 compute-0 systemd[1]: libpod-conmon-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope: Deactivated successfully.
Jan 10 17:21:38 compute-0 podman[242520]: 2026-01-10 17:21:38.740413857 +0000 UTC m=+0.043503074 container create ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:21:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:21:38 compute-0 systemd[1]: Started libpod-conmon-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope.
Jan 10 17:21:38 compute-0 podman[242520]: 2026-01-10 17:21:38.720404967 +0000 UTC m=+0.023494224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:38 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:38 compute-0 podman[242520]: 2026-01-10 17:21:38.867781764 +0000 UTC m=+0.170871061 container init ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 17:21:38 compute-0 podman[242520]: 2026-01-10 17:21:38.878895661 +0000 UTC m=+0.181984888 container start ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:21:38 compute-0 podman[242520]: 2026-01-10 17:21:38.883640251 +0000 UTC m=+0.186729488 container attach ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:21:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 10 17:21:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 10 17:21:38 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.098 237053 INFO nova.virt.libvirt.driver [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deleting instance files /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d_del
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.100 237053 INFO nova.virt.libvirt.driver [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deletion of /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d_del complete
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.172 237053 DEBUG nova.virt.libvirt.host [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.173 237053 INFO nova.virt.libvirt.host [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] UEFI support detected
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.175 237053 INFO nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.176 237053 DEBUG oslo.service.loopingcall [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.177 237053 DEBUG nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.177 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:21:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:21:39 compute-0 loving_meitner[242537]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:21:39 compute-0 loving_meitner[242537]: --> All data devices are unavailable
Jan 10 17:21:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 10 17:21:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 10 17:21:39 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 10 17:21:39 compute-0 systemd[1]: libpod-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope: Deactivated successfully.
Jan 10 17:21:39 compute-0 podman[242520]: 2026-01-10 17:21:39.417679002 +0000 UTC m=+0.720768239 container died ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 17:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8-merged.mount: Deactivated successfully.
Jan 10 17:21:39 compute-0 podman[242520]: 2026-01-10 17:21:39.472001484 +0000 UTC m=+0.775090701 container remove ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:21:39 compute-0 systemd[1]: libpod-conmon-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope: Deactivated successfully.
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.492 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.520 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 17:21:39 compute-0 sudo[242421]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.547 237053 INFO nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 0.37 seconds to deallocate network for instance.
Jan 10 17:21:39 compute-0 sudo[242570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:21:39 compute-0 sudo[242570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:39 compute-0 sudo[242570]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.619 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.620 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:39 compute-0 sudo[242595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:21:39 compute-0 sudo[242595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:39 compute-0 nova_compute[237049]: 2026-01-10 17:21:39.703 237053 DEBUG oslo_concurrency.processutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:39 compute-0 ceph-mon[75249]: pgmap v820: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.8 KiB/s wr, 132 op/s
Jan 10 17:21:39 compute-0 ceph-mon[75249]: osdmap e110: 3 total, 3 up, 3 in
Jan 10 17:21:39 compute-0 ceph-mon[75249]: osdmap e111: 3 total, 3 up, 3 in
Jan 10 17:21:39 compute-0 podman[242652]: 2026-01-10 17:21:39.956037469 +0000 UTC m=+0.039863006 container create 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:21:40 compute-0 systemd[1]: Started libpod-conmon-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope.
Jan 10 17:21:40 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:39.936530134 +0000 UTC m=+0.020355651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:40.048673601 +0000 UTC m=+0.132499188 container init 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:40.060951233 +0000 UTC m=+0.144776760 container start 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:40.065523478 +0000 UTC m=+0.149349015 container attach 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:21:40 compute-0 thirsty_mclean[242668]: 167 167
Jan 10 17:21:40 compute-0 systemd[1]: libpod-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope: Deactivated successfully.
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:40.069419963 +0000 UTC m=+0.153245460 container died 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca18b563b63b57e64a640a7310f9c598bba6e35d45c783dfa150dc6b1dfd0ee4-merged.mount: Deactivated successfully.
Jan 10 17:21:40 compute-0 podman[242652]: 2026-01-10 17:21:40.111767172 +0000 UTC m=+0.195592679 container remove 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:21:40 compute-0 systemd[1]: libpod-conmon-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope: Deactivated successfully.
Jan 10 17:21:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2679230637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.258 237053 DEBUG oslo_concurrency.processutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.267 237053 DEBUG nova.compute.provider_tree [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.321 237053 DEBUG nova.scheduler.client.report [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.337635574 +0000 UTC m=+0.048174642 container create 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.376 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:40 compute-0 systemd[1]: Started libpod-conmon-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope.
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.313985786 +0000 UTC m=+0.024524834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.421 237053 INFO nova.scheduler.client.report [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Deleted allocations for instance 114a4603-17a5-4e6b-b2d6-c77ef324a07d
Jan 10 17:21:40 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.459665023 +0000 UTC m=+0.170204161 container init 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.47246984 +0000 UTC m=+0.183008908 container start 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.478048715 +0000 UTC m=+0.188587853 container attach 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.528 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 9.8 KiB/s wr, 219 op/s
Jan 10 17:21:40 compute-0 happy_volhard[242711]: {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     "0": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "devices": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "/dev/loop3"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             ],
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_name": "ceph_lv0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_size": "21470642176",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "name": "ceph_lv0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "tags": {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_name": "ceph",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.crush_device_class": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.encrypted": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.objectstore": "bluestore",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_id": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.vdo": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.with_tpm": "0"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             },
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "vg_name": "ceph_vg0"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         }
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     ],
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     "1": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "devices": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "/dev/loop4"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             ],
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_name": "ceph_lv1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_size": "21470642176",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "name": "ceph_lv1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "tags": {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_name": "ceph",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.crush_device_class": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.encrypted": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.objectstore": "bluestore",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_id": "1",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.vdo": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.with_tpm": "0"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             },
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "vg_name": "ceph_vg1"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         }
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     ],
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     "2": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "devices": [
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "/dev/loop5"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             ],
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_name": "ceph_lv2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_size": "21470642176",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "name": "ceph_lv2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "tags": {
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.cluster_name": "ceph",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.crush_device_class": "",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.encrypted": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.objectstore": "bluestore",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osd_id": "2",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.vdo": "0",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:                 "ceph.with_tpm": "0"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             },
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "type": "block",
Jan 10 17:21:40 compute-0 happy_volhard[242711]:             "vg_name": "ceph_vg2"
Jan 10 17:21:40 compute-0 happy_volhard[242711]:         }
Jan 10 17:21:40 compute-0 happy_volhard[242711]:     ]
Jan 10 17:21:40 compute-0 happy_volhard[242711]: }
Jan 10 17:21:40 compute-0 systemd[1]: libpod-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope: Deactivated successfully.
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.858001921 +0000 UTC m=+0.568540989 container died 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123-merged.mount: Deactivated successfully.
Jan 10 17:21:40 compute-0 podman[242694]: 2026-01-10 17:21:40.909665094 +0000 UTC m=+0.620204122 container remove 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:21:40 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2679230637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:40 compute-0 systemd[1]: libpod-conmon-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope: Deactivated successfully.
Jan 10 17:21:40 compute-0 sudo[242595]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:40 compute-0 nova_compute[237049]: 2026-01-10 17:21:40.998 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.000 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.000 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.001 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.001 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.004 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Terminating instance
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.006 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.006 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquired lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.007 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 10 17:21:41 compute-0 sudo[242732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:21:41 compute-0 sudo[242732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:41 compute-0 sudo[242732]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:41 compute-0 sudo[242757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:21:41 compute-0 sudo[242757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.414 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.416108791 +0000 UTC m=+0.056167108 container create 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:21:41 compute-0 systemd[1]: Started libpod-conmon-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope.
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.388250959 +0000 UTC m=+0.028309326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.519129488 +0000 UTC m=+0.159187835 container init 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.528974479 +0000 UTC m=+0.169032806 container start 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.534032548 +0000 UTC m=+0.174090865 container attach 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:21:41 compute-0 affectionate_northcutt[242809]: 167 167
Jan 10 17:21:41 compute-0 systemd[1]: libpod-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope: Deactivated successfully.
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.536179481 +0000 UTC m=+0.176237798 container died 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9604867bab8c8850292d061b4f0f68ec5afbaadc360a8c7c8e43cf572bf00334-merged.mount: Deactivated successfully.
Jan 10 17:21:41 compute-0 podman[242793]: 2026-01-10 17:21:41.5927641 +0000 UTC m=+0.232822407 container remove 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:21:41 compute-0 systemd[1]: libpod-conmon-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope: Deactivated successfully.
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.646 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.664 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Releasing lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.665 237053 DEBUG nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 10 17:21:41 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 10 17:21:41 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.658s CPU time.
Jan 10 17:21:41 compute-0 systemd-machined[205102]: Machine qemu-1-instance-00000001 terminated.
Jan 10 17:21:41 compute-0 podman[242833]: 2026-01-10 17:21:41.836739756 +0000 UTC m=+0.051054037 container create 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 17:21:41 compute-0 systemd[1]: Started libpod-conmon-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope.
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.893 237053 INFO nova.virt.libvirt.driver [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance destroyed successfully.
Jan 10 17:21:41 compute-0 nova_compute[237049]: 2026-01-10 17:21:41.894 237053 DEBUG nova.objects.instance [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'resources' on Instance uuid 6290fedf-9ecb-464c-8d5e-b6af64859702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 17:21:41 compute-0 podman[242833]: 2026-01-10 17:21:41.813990905 +0000 UTC m=+0.028305226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:21:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:41 compute-0 ceph-mon[75249]: pgmap v823: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 9.8 KiB/s wr, 219 op/s
Jan 10 17:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:21:41 compute-0 podman[242833]: 2026-01-10 17:21:41.937194608 +0000 UTC m=+0.151508899 container init 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:21:41 compute-0 podman[242833]: 2026-01-10 17:21:41.947685778 +0000 UTC m=+0.162000059 container start 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:21:41 compute-0 podman[242833]: 2026-01-10 17:21:41.951314175 +0000 UTC m=+0.165628466 container attach 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.075 237053 INFO nova.virt.libvirt.driver [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting instance files /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702_del
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.076 237053 INFO nova.virt.libvirt.driver [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deletion of /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702_del complete
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.134 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.47 seconds to destroy the instance on the hypervisor.
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.135 237053 DEBUG oslo.service.loopingcall [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.136 237053 DEBUG nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.136 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.357 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.386 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.399 237053 INFO nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.26 seconds to deallocate network for instance.
Jan 10 17:21:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 140 op/s
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.621 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.22 seconds to detach 1 volumes for instance.
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.623 237053 DEBUG nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting volume: 77e9b8e1-774e-41cc-88ba-d21e1643cb3e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 10 17:21:42 compute-0 lvm[242945]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:21:42 compute-0 lvm[242945]: VG ceph_vg0 finished
Jan 10 17:21:42 compute-0 lvm[242948]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:21:42 compute-0 lvm[242948]: VG ceph_vg1 finished
Jan 10 17:21:42 compute-0 lvm[242950]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:21:42 compute-0 lvm[242950]: VG ceph_vg2 finished
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.789 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.790 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:42 compute-0 funny_zhukovsky[242850]: {}
Jan 10 17:21:42 compute-0 nova_compute[237049]: 2026-01-10 17:21:42.827 237053 DEBUG oslo_concurrency.processutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:42 compute-0 systemd[1]: libpod-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Deactivated successfully.
Jan 10 17:21:42 compute-0 podman[242833]: 2026-01-10 17:21:42.871770031 +0000 UTC m=+1.086084342 container died 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 10 17:21:42 compute-0 systemd[1]: libpod-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Consumed 1.451s CPU time.
Jan 10 17:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d-merged.mount: Deactivated successfully.
Jan 10 17:21:42 compute-0 podman[242833]: 2026-01-10 17:21:42.929098542 +0000 UTC m=+1.143412853 container remove 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:21:42 compute-0 systemd[1]: libpod-conmon-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Deactivated successfully.
Jan 10 17:21:42 compute-0 sudo[242757]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:21:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:21:42 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:43 compute-0 sudo[242966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:21:43 compute-0 sudo[242966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:21:43 compute-0 sudo[242966]: pam_unix(sudo:session): session closed for user root
Jan 10 17:21:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703407448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.432 237053 DEBUG oslo_concurrency.processutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.442 237053 DEBUG nova.compute.provider_tree [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.470 237053 DEBUG nova.scheduler.client.report [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.497 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.526 237053 INFO nova.scheduler.client.report [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Deleted allocations for instance 6290fedf-9ecb-464c-8d5e-b6af64859702
Jan 10 17:21:43 compute-0 nova_compute[237049]: 2026-01-10 17:21:43.591 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:43 compute-0 ceph-mon[75249]: pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 140 op/s
Jan 10 17:21:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:21:43 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1703407448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 10 17:21:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:21:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:21:44 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 10 17:21:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 10 17:21:44 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.7939546245642764e-06 of space, bias 1.0, pg target 0.0008381863873692829 quantized to 32 (current 32)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.592532299548735e-07 of space, bias 1.0, pg target 7.777596898646204e-05 quantized to 32 (current 32)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006687338715334877 of space, bias 1.0, pg target 0.2006201614600463 quantized to 32 (current 32)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0344668074946482e-06 of space, bias 4.0, pg target 0.0012413601689935778 quantized to 16 (current 16)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:21:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.1 KiB/s wr, 146 op/s
Jan 10 17:21:45 compute-0 ceph-mon[75249]: osdmap e112: 3 total, 3 up, 3 in
Jan 10 17:21:45 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:21:45 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:21:45 compute-0 ceph-mon[75249]: osdmap e113: 3 total, 3 up, 3 in
Jan 10 17:21:46 compute-0 ceph-mon[75249]: pgmap v827: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.1 KiB/s wr, 146 op/s
Jan 10 17:21:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.6 KiB/s wr, 154 op/s
Jan 10 17:21:48 compute-0 ceph-mon[75249]: pgmap v828: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.6 KiB/s wr, 154 op/s
Jan 10 17:21:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.4 KiB/s wr, 89 op/s
Jan 10 17:21:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.923 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.925 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.926 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 10 17:21:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 10 17:21:49 compute-0 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 10 17:21:50 compute-0 ceph-mon[75249]: pgmap v829: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.4 KiB/s wr, 89 op/s
Jan 10 17:21:50 compute-0 ceph-mon[75249]: osdmap e114: 3 total, 3 up, 3 in
Jan 10 17:21:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 KiB/s wr, 62 op/s
Jan 10 17:21:52 compute-0 ceph-mon[75249]: pgmap v831: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 KiB/s wr, 62 op/s
Jan 10 17:21:52 compute-0 nova_compute[237049]: 2026-01-10 17:21:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:52 compute-0 nova_compute[237049]: 2026-01-10 17:21:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:21:52 compute-0 nova_compute[237049]: 2026-01-10 17:21:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:21:52 compute-0 nova_compute[237049]: 2026-01-10 17:21:52.362 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:21:52 compute-0 nova_compute[237049]: 2026-01-10 17:21:52.363 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Jan 10 17:21:53 compute-0 nova_compute[237049]: 2026-01-10 17:21:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:53 compute-0 nova_compute[237049]: 2026-01-10 17:21:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:53 compute-0 nova_compute[237049]: 2026-01-10 17:21:53.463 237053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768065698.4622705, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:53 compute-0 nova_compute[237049]: 2026-01-10 17:21:53.464 237053 INFO nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Stopped (Lifecycle Event)
Jan 10 17:21:53 compute-0 nova_compute[237049]: 2026-01-10 17:21:53.492 237053 DEBUG nova.compute.manager [None req-74d6b09b-f480-4798-ac92-06e3b5c6750c - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:54 compute-0 ceph-mon[75249]: pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Jan 10 17:21:54 compute-0 nova_compute[237049]: 2026-01-10 17:21:54.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:21:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.384 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.386 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.386 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:55 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823587697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:55 compute-0 nova_compute[237049]: 2026-01-10 17:21:55.994 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:56 compute-0 ceph-mon[75249]: pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Jan 10 17:21:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2823587697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.250 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.252 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98824910167605GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.253 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.253 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.347 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.348 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.371 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:21:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 307 B/s wr, 13 op/s
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.891 237053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768065701.8899665, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.892 237053 INFO nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Stopped (Lifecycle Event)
Jan 10 17:21:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:21:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724504685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.921 237053 DEBUG nova.compute.manager [None req-1ba4046f-acc8-45a7-8645-d1cd2b371baa - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.924 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.932 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.950 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.976 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:21:56 compute-0 nova_compute[237049]: 2026-01-10 17:21:56.977 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:21:57 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3724504685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:21:57 compute-0 nova_compute[237049]: 2026-01-10 17:21:57.977 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:57 compute-0 nova_compute[237049]: 2026-01-10 17:21:57.994 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:21:58 compute-0 ceph-mon[75249]: pgmap v834: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 307 B/s wr, 13 op/s
Jan 10 17:21:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:21:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:00 compute-0 ceph-mon[75249]: pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:02 compute-0 ceph-mon[75249]: pgmap v836: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:04 compute-0 ceph-mon[75249]: pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:06 compute-0 ceph-mon[75249]: pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:07 compute-0 podman[243056]: 2026-01-10 17:22:07.103845039 +0000 UTC m=+0.102520535 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 10 17:22:07 compute-0 podman[243057]: 2026-01-10 17:22:07.13950166 +0000 UTC m=+0.136745384 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 10 17:22:08 compute-0 ceph-mon[75249]: pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:10 compute-0 ceph-mon[75249]: pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:11 compute-0 ceph-mon[75249]: pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:13 compute-0 ceph-mon[75249]: pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:15 compute-0 ceph-mon[75249]: pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:17 compute-0 ceph-mon[75249]: pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:19 compute-0 ceph-mon[75249]: pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:21 compute-0 ceph-mon[75249]: pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:23 compute-0 ceph-mon[75249]: pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:25 compute-0 ceph-mon[75249]: pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:27 compute-0 ceph-mon[75249]: pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:29 compute-0 ceph-mon[75249]: pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:31 compute-0 ceph-mon[75249]: pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:33 compute-0 ceph-mon[75249]: pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:35 compute-0 ceph-mon[75249]: pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:22:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:22:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:22:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:22:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:22:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:22:37 compute-0 ceph-mon[75249]: pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:38 compute-0 podman[243099]: 2026-01-10 17:22:38.093774589 +0000 UTC m=+0.086132424 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:22:38
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups']
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:22:38 compute-0 podman[243100]: 2026-01-10 17:22:38.152628094 +0000 UTC m=+0.138091685 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:22:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:22:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:22:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:39 compute-0 ceph-mon[75249]: pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:41 compute-0 ceph-mon[75249]: pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:43 compute-0 sudo[243144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:22:43 compute-0 sudo[243144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:43 compute-0 sudo[243144]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:43 compute-0 sudo[243169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:22:43 compute-0 sudo[243169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:43 compute-0 sudo[243169]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:22:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:22:43 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:22:43 compute-0 sudo[243227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:22:43 compute-0 sudo[243227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:43 compute-0 sudo[243227]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:44 compute-0 sudo[243252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:22:44 compute-0 sudo[243252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.455099326 +0000 UTC m=+0.062198430 container create 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:22:44 compute-0 systemd[1]: Started libpod-conmon-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope.
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.432765468 +0000 UTC m=+0.039864582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.566897331 +0000 UTC m=+0.173996475 container init 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.575138393 +0000 UTC m=+0.182237467 container start 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.579687081 +0000 UTC m=+0.186786155 container attach 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:22:44 compute-0 intelligent_chaum[243305]: 167 167
Jan 10 17:22:44 compute-0 systemd[1]: libpod-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope: Deactivated successfully.
Jan 10 17:22:44 compute-0 conmon[243305]: conmon 2da4530523d640b78dff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope/container/memory.events
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.588611292 +0000 UTC m=+0.195710396 container died 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2267958606bc0f98a268082a811763028481617494199c899eb7de3ceb17d2-merged.mount: Deactivated successfully.
Jan 10 17:22:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:44 compute-0 podman[243289]: 2026-01-10 17:22:44.636562141 +0000 UTC m=+0.243661245 container remove 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:22:44 compute-0 systemd[1]: libpod-conmon-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope: Deactivated successfully.
Jan 10 17:22:44 compute-0 podman[243329]: 2026-01-10 17:22:44.869897805 +0000 UTC m=+0.056495191 container create 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 10 17:22:44 compute-0 systemd[1]: Started libpod-conmon-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope.
Jan 10 17:22:44 compute-0 podman[243329]: 2026-01-10 17:22:44.84485548 +0000 UTC m=+0.031452956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:44 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:44 compute-0 podman[243329]: 2026-01-10 17:22:44.988188022 +0000 UTC m=+0.174785468 container init 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 17:22:44 compute-0 sshd-session[243225]: Connection closed by authenticating user root 216.36.124.133 port 54562 [preauth]
Jan 10 17:22:45 compute-0 podman[243329]: 2026-01-10 17:22:45.005430077 +0000 UTC m=+0.192027493 container start 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:22:45 compute-0 podman[243329]: 2026-01-10 17:22:45.010496469 +0000 UTC m=+0.197093935 container attach 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:22:45 compute-0 serene_herschel[243345]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:22:45 compute-0 serene_herschel[243345]: --> All data devices are unavailable
Jan 10 17:22:45 compute-0 systemd[1]: libpod-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope: Deactivated successfully.
Jan 10 17:22:45 compute-0 podman[243329]: 2026-01-10 17:22:45.583845817 +0000 UTC m=+0.770443233 container died 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301-merged.mount: Deactivated successfully.
Jan 10 17:22:45 compute-0 podman[243329]: 2026-01-10 17:22:45.639880724 +0000 UTC m=+0.826478130 container remove 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 10 17:22:45 compute-0 systemd[1]: libpod-conmon-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope: Deactivated successfully.
Jan 10 17:22:45 compute-0 sudo[243252]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:45 compute-0 sudo[243377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:22:45 compute-0 sudo[243377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:45 compute-0 sudo[243377]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:45 compute-0 sudo[243402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:22:45 compute-0 sudo[243402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:45 compute-0 ceph-mon[75249]: pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.211564954 +0000 UTC m=+0.065050141 container create f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:22:46 compute-0 systemd[1]: Started libpod-conmon-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope.
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.184676677 +0000 UTC m=+0.038161924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.303099588 +0000 UTC m=+0.156584765 container init f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.313380958 +0000 UTC m=+0.166866105 container start f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.31666358 +0000 UTC m=+0.170148747 container attach f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 17:22:46 compute-0 great_edison[243455]: 167 167
Jan 10 17:22:46 compute-0 systemd[1]: libpod-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope: Deactivated successfully.
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.319435888 +0000 UTC m=+0.172921075 container died f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 17:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed15df06d36b5c15531209b4fa74cc2a7c01304a2691e20ab045901d5304048-merged.mount: Deactivated successfully.
Jan 10 17:22:46 compute-0 podman[243439]: 2026-01-10 17:22:46.366814371 +0000 UTC m=+0.220299528 container remove f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:22:46 compute-0 systemd[1]: libpod-conmon-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope: Deactivated successfully.
Jan 10 17:22:46 compute-0 podman[243478]: 2026-01-10 17:22:46.589829604 +0000 UTC m=+0.055760019 container create 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:22:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:46 compute-0 systemd[1]: Started libpod-conmon-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope.
Jan 10 17:22:46 compute-0 podman[243478]: 2026-01-10 17:22:46.566508468 +0000 UTC m=+0.032438923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:46 compute-0 podman[243478]: 2026-01-10 17:22:46.702946286 +0000 UTC m=+0.168876741 container init 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:22:46 compute-0 podman[243478]: 2026-01-10 17:22:46.714091289 +0000 UTC m=+0.180021704 container start 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:22:46 compute-0 podman[243478]: 2026-01-10 17:22:46.718547055 +0000 UTC m=+0.184477510 container attach 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]: {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     "0": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "devices": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "/dev/loop3"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             ],
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_name": "ceph_lv0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_size": "21470642176",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "name": "ceph_lv0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "tags": {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_name": "ceph",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.crush_device_class": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.encrypted": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.objectstore": "bluestore",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_id": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.vdo": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.with_tpm": "0"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             },
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "vg_name": "ceph_vg0"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         }
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     ],
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     "1": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "devices": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "/dev/loop4"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             ],
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_name": "ceph_lv1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_size": "21470642176",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "name": "ceph_lv1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "tags": {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_name": "ceph",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.crush_device_class": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.encrypted": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.objectstore": "bluestore",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_id": "1",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.vdo": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.with_tpm": "0"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             },
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "vg_name": "ceph_vg1"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         }
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     ],
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     "2": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "devices": [
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "/dev/loop5"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             ],
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_name": "ceph_lv2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_size": "21470642176",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "name": "ceph_lv2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "tags": {
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.cluster_name": "ceph",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.crush_device_class": "",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.encrypted": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.objectstore": "bluestore",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osd_id": "2",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.vdo": "0",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:                 "ceph.with_tpm": "0"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             },
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "type": "block",
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:             "vg_name": "ceph_vg2"
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:         }
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]:     ]
Jan 10 17:22:47 compute-0 suspicious_lalande[243495]: }
Jan 10 17:22:47 compute-0 systemd[1]: libpod-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope: Deactivated successfully.
Jan 10 17:22:47 compute-0 podman[243478]: 2026-01-10 17:22:47.096080245 +0000 UTC m=+0.562010660 container died 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52-merged.mount: Deactivated successfully.
Jan 10 17:22:47 compute-0 podman[243478]: 2026-01-10 17:22:47.158820659 +0000 UTC m=+0.624751074 container remove 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:22:47 compute-0 systemd[1]: libpod-conmon-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope: Deactivated successfully.
Jan 10 17:22:47 compute-0 sudo[243402]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:47 compute-0 sudo[243516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:22:47 compute-0 sudo[243516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:47 compute-0 sudo[243516]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:47 compute-0 sudo[243541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:22:47 compute-0 sudo[243541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.70205142 +0000 UTC m=+0.062492589 container create 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 17:22:47 compute-0 systemd[1]: Started libpod-conmon-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope.
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.673541128 +0000 UTC m=+0.033982337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:47 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.787276897 +0000 UTC m=+0.147718086 container init 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.79520182 +0000 UTC m=+0.155642969 container start 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.799296655 +0000 UTC m=+0.159737864 container attach 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:22:47 compute-0 mystifying_boyd[243594]: 167 167
Jan 10 17:22:47 compute-0 systemd[1]: libpod-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope: Deactivated successfully.
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.804004818 +0000 UTC m=+0.164445937 container died 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3c8007052594733bd75da49ccfe8a0bfde96cd181635c9e4f3f80d03bb5c8e-merged.mount: Deactivated successfully.
Jan 10 17:22:47 compute-0 podman[243578]: 2026-01-10 17:22:47.856131904 +0000 UTC m=+0.216573063 container remove 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 17:22:47 compute-0 systemd[1]: libpod-conmon-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope: Deactivated successfully.
Jan 10 17:22:47 compute-0 ceph-mon[75249]: pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:48 compute-0 podman[243616]: 2026-01-10 17:22:48.084149618 +0000 UTC m=+0.072945883 container create b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:22:48 compute-0 systemd[1]: Started libpod-conmon-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope.
Jan 10 17:22:48 compute-0 podman[243616]: 2026-01-10 17:22:48.053617659 +0000 UTC m=+0.042414004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:22:48 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:22:48 compute-0 podman[243616]: 2026-01-10 17:22:48.184995725 +0000 UTC m=+0.173792000 container init b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:22:48 compute-0 podman[243616]: 2026-01-10 17:22:48.192129026 +0000 UTC m=+0.180925291 container start b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:22:48 compute-0 podman[243616]: 2026-01-10 17:22:48.196628262 +0000 UTC m=+0.185424537 container attach b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:22:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.936 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:22:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.940 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:22:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.940 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:22:48 compute-0 lvm[243710]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:22:48 compute-0 lvm[243710]: VG ceph_vg0 finished
Jan 10 17:22:48 compute-0 lvm[243711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:22:48 compute-0 lvm[243711]: VG ceph_vg1 finished
Jan 10 17:22:48 compute-0 lvm[243713]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:22:48 compute-0 lvm[243713]: VG ceph_vg2 finished
Jan 10 17:22:49 compute-0 lvm[243715]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:22:49 compute-0 lvm[243715]: VG ceph_vg2 finished
Jan 10 17:22:49 compute-0 awesome_herschel[243632]: {}
Jan 10 17:22:49 compute-0 systemd[1]: libpod-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Deactivated successfully.
Jan 10 17:22:49 compute-0 systemd[1]: libpod-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Consumed 1.397s CPU time.
Jan 10 17:22:49 compute-0 podman[243616]: 2026-01-10 17:22:49.08777917 +0000 UTC m=+1.076575415 container died b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661-merged.mount: Deactivated successfully.
Jan 10 17:22:49 compute-0 podman[243616]: 2026-01-10 17:22:49.141364807 +0000 UTC m=+1.130161042 container remove b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 17:22:49 compute-0 systemd[1]: libpod-conmon-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Deactivated successfully.
Jan 10 17:22:49 compute-0 sudo[243541]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:22:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:22:49 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:49 compute-0 sudo[243728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:22:49 compute-0 sudo[243728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:22:49 compute-0 sudo[243728]: pam_unix(sudo:session): session closed for user root
Jan 10 17:22:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:50 compute-0 ceph-mon[75249]: pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:22:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:52 compute-0 ceph-mon[75249]: pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:52 compute-0 nova_compute[237049]: 2026-01-10 17:22:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:52 compute-0 nova_compute[237049]: 2026-01-10 17:22:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:22:52 compute-0 nova_compute[237049]: 2026-01-10 17:22:52.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:22:52 compute-0 nova_compute[237049]: 2026-01-10 17:22:52.378 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:22:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:53 compute-0 nova_compute[237049]: 2026-01-10 17:22:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:53 compute-0 nova_compute[237049]: 2026-01-10 17:22:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:54 compute-0 ceph-mon[75249]: pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:54 compute-0 sshd-session[243753]: Accepted publickey for zuul from 192.168.122.10 port 34978 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:22:54 compute-0 systemd-logind[798]: New session 52 of user zuul.
Jan 10 17:22:54 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 10 17:22:54 compute-0 sshd-session[243753]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:22:55 compute-0 sudo[243757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 10 17:22:55 compute-0 sudo[243757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:22:55 compute-0 nova_compute[237049]: 2026-01-10 17:22:55.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:55 compute-0 nova_compute[237049]: 2026-01-10 17:22:55.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:55 compute-0 nova_compute[237049]: 2026-01-10 17:22:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:55 compute-0 nova_compute[237049]: 2026-01-10 17:22:55.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:22:56 compute-0 ceph-mon[75249]: pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.388 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.391 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.392 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:22:57 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:22:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:22:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558181602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:22:57 compute-0 nova_compute[237049]: 2026-01-10 17:22:57.980 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:22:58 compute-0 ceph-mon[75249]: pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1558181602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.149 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.150 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.150 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.151 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.254 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.254 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.294 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:22:58 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:22:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:22:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:22:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322899128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.860 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.868 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.890 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.892 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:22:58 compute-0 nova_compute[237049]: 2026-01-10 17:22:58.893 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:22:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 17:22:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587073077' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:22:59 compute-0 ceph-mon[75249]: from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:22:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2322899128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:22:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1587073077' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:22:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:22:59 compute-0 nova_compute[237049]: 2026-01-10 17:22:59.892 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:00 compute-0 ceph-mon[75249]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:00 compute-0 ceph-mon[75249]: pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:02 compute-0 ceph-mon[75249]: pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:03 compute-0 ceph-mon[75249]: pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:04 compute-0 ovs-vsctl[244103]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 10 17:23:05 compute-0 ceph-mon[75249]: pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:06 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 10 17:23:06 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 10 17:23:06 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 10 17:23:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:06 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: cache status {prefix=cache status} (starting...)
Jan 10 17:23:06 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: client ls {prefix=client ls} (starting...)
Jan 10 17:23:07 compute-0 lvm[244448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:23:07 compute-0 lvm[244448]: VG ceph_vg0 finished
Jan 10 17:23:07 compute-0 lvm[244455]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:23:07 compute-0 lvm[244455]: VG ceph_vg2 finished
Jan 10 17:23:07 compute-0 lvm[244459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:23:07 compute-0 lvm[244459]: VG ceph_vg1 finished
Jan 10 17:23:07 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14704 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:07 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 10 17:23:07 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 10 17:23:07 compute-0 ceph-mon[75249]: pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:07 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:07 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 10 17:23:08 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 10 17:23:08 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 10 17:23:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:08 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 10 17:23:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 10 17:23:08 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534400192' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 17:23:08 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 10 17:23:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:08 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 10 17:23:08 compute-0 ceph-mon[75249]: from='client.14704 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:08 compute-0 ceph-mon[75249]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:08 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1534400192' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 17:23:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14712 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:08 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:23:08 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:08.844+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:23:08 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:23:08 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192949448' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:09 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: ops {prefix=ops} (starting...)
Jan 10 17:23:09 compute-0 podman[244671]: 2026-01-10 17:23:09.079653232 +0000 UTC m=+0.074257530 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 17:23:09 compute-0 podman[244682]: 2026-01-10 17:23:09.119635197 +0000 UTC m=+0.108969217 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691955246' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165695532' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.707544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789707730, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1918, "num_deletes": 256, "total_data_size": 2019380, "memory_usage": 2057312, "flush_reason": "Manual Compaction"}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 10 17:23:09 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: session ls {prefix=session ls} (starting...)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789735290, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1373494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15812, "largest_seqno": 17729, "table_properties": {"data_size": 1366255, "index_size": 4057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17345, "raw_average_key_size": 20, "raw_value_size": 1350731, "raw_average_value_size": 1631, "num_data_blocks": 182, "num_entries": 828, "num_filter_entries": 828, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065636, "oldest_key_time": 1768065636, "file_creation_time": 1768065789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 27791 microseconds, and 9819 cpu microseconds.
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.735353) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1373494 bytes OK
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.735381) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738004) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738025) EVENT_LOG_v1 {"time_micros": 1768065789738020, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738047) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2011086, prev total WAL file size 2011086, number of live WAL files 2.
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1341KB)], [38(5816KB)]
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789739125, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7329086, "oldest_snapshot_seqno": -1}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3959 keys, 5814869 bytes, temperature: kUnknown
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789789368, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5814869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5786784, "index_size": 17095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 93119, "raw_average_key_size": 23, "raw_value_size": 5714042, "raw_average_value_size": 1443, "num_data_blocks": 735, "num_entries": 3959, "num_filter_entries": 3959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.789622) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5814869 bytes
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.791114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.7 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.2) OK, records in: 4411, records dropped: 452 output_compression: NoCompression
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.791140) EVENT_LOG_v1 {"time_micros": 1768065789791123, "job": 18, "event": "compaction_finished", "compaction_time_micros": 50333, "compaction_time_cpu_micros": 22117, "output_level": 6, "num_output_files": 1, "total_output_size": 5814869, "num_input_records": 4411, "num_output_records": 3959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789791555, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789792802, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:09 compute-0 ceph-mon[75249]: from='client.14708 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:09 compute-0 ceph-mon[75249]: from='client.14712 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2192949448' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2691955246' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4165695532' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: status {prefix=status} (starting...)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964299435' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:23:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 10 17:23:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633338918' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 17:23:10 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 17:23:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561152790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:23:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1964299435' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:23:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/633338918' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 17:23:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1561152790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:23:10 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 17:23:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43030912' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 10 17:23:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813902671' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 17:23:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220948933' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:11 compute-0 ceph-mon[75249]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/43030912' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1813902671' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 17:23:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3220948933' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:23:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 10 17:23:12 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164187188' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 17:23:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 17:23:12 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700083542' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 17:23:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:12 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:12 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 17:23:12 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:12.681+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 17:23:12 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1164187188' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 17:23:12 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3700083542' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 17:23:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 17:23:12 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576411142' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:23:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 10 17:23:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168958456' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 17:23:13 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001029 2 0.000015
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001028 2 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000041 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001112 2 0.000032
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001170 2 0.000011
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001207 2 0.000016
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001218 2 0.000013
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001240 2 0.000012
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000055 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001316 2 0.000015
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001347 2 0.000021
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000562 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001381 2 0.000016
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001416 2 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001437 2 0.000022
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001465 2 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001522 2 0.000014
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001567 2 0.000015
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001582 2 0.000037
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000049 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001696 2 0.000021
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001770 2 0.000013
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57753600 unmapped: 2007040 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:16.966669+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 40 handle_osd_map epochs [40,41], i have 40, src has [1,41]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010594 4 0.000067
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010683 4 0.000097
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010680 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010791 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009026 4 0.000278
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009186 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009239 4 0.000087
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009776 4 0.000111
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009838 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009316 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009457 4 0.000140
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009572 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011581 4 0.000110
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011640 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009661 4 0.000180
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009811 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009595 4 0.000237
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009974 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011530 4 0.000080
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011579 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011058 4 0.000161
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010723 4 0.000099
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011289 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010773 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010667 4 0.000072
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010721 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010703 4 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010744 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011468 4 0.000232
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011621 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010609 4 0.000399
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010903 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010640 4 0.000040
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 75.884468079s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013617 3 0.000174
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010594 4 0.000636
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011210 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 75.884468079s@ mbc={}] exit Started/Primary/Peering 1.013764 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 75.884468079s@ mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011321 4 0.000091
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011370 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011250 4 0.000118
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011334 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011445 4 0.000201
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011578 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010844 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011549 4 0.000073
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011629 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010090 4 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010130 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010867 4 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010915 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010748 4 0.000156
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010864 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011130 4 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011158 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012697 4 0.000167
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012828 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011150 4 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011249 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010944 4 0.000072
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010997 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010865 4 0.000073
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010915 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012694 4 0.000066
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012745 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004155 3 0.000355
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008803 3 0.000142
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009004 3 0.000411
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008780 3 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008963 3 0.000092
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008894 3 0.000115
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009030 3 0.000236
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008887 3 0.000363
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008809 3 0.000052
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008917 3 0.000078
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009024 3 0.000209
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009179 3 0.000043
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009198 3 0.000200
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009099 3 0.000052
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009051 3 0.000092
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008979 3 0.000061
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008973 3 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008901 3 0.000063
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009013 3 0.000093
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008853 3 0.000075
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008871 3 0.000051
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000050 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008972 3 0.000139
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009009 3 0.000071
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009039 3 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-mon[75249]: pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:13 compute-0 ceph-mon[75249]: from='client.14740 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:13 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1576411142' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:23:13 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1168958456' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009039 3 0.000117
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009334 3 0.000113
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009005 3 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009004 3 0.000077
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009253 3 0.000729
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008987 3 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009796 3 0.000923
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008855 3 0.000038
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57901056 unmapped: 2908160 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:17.966890+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 325799 data_alloc: 218103808 data_used: 0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57917440 unmapped: 2891776 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:18.967058+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:48.018767+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.9 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:48.029320+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.9 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57925632 unmapped: 2883584 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.575145721s of 10.665144920s, submitted: 238
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 9)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:48.018767+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.9 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:48.029320+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.9 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:19.967354+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 42 heartbeat osd_stat(store_statfs(0x4fe164000/0x0/0x4ffc00000, data 0x291bd/0x68000, compress 0x0/0x0/0x0, omap 0x4878, meta 0x1a2b788), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57982976 unmapped: 2826240 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:20.967803+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 2678784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:21.968063+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:51.095568+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:51.106053+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 11)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:51.095568+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:51.106053+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 2678784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:22.968529+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336425 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58146816 unmapped: 2662400 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:23.968818+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:53.133872+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.1d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:53.144303+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.1d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.973673 7 0.000115
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.982894 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.993606 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.993657 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.974040 7 0.000128
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.982990 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.992219 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.992350 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.053740 21 0.000199
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.060567 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.053251 21 0.000146
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.060701 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.060479 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.060522 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.060549 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.946036339s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196174622s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.060804 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025462151s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275505066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945967674s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196235657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025725365s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275756836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] exit Reset 0.000337 1 0.000466
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] exit Reset 0.000780 1 0.000984
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] exit Reset 0.000728 1 0.000890
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] exit Start 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] exit Start 0.000211 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055061 21 0.000237
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061608 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.061722 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.061816 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944757462s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195938110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] exit Reset 0.000551 1 0.001240
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] exit Reset 0.000096 1 0.000174
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055358 21 0.000886
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] exit Start 0.000042 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055371 21 0.000109
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061839 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.062584 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061939 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.062646 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.062511 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.062604 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944560051s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196243286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] exit Reset 0.000154 1 0.000241
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943988800s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195838928s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.975965 7 0.000104
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.984812 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.994691 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] exit Reset 0.000125 1 0.000673
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.994732 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023788452s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275848389s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] exit Start 0.000145 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] exit Reset 0.000147 1 0.000249
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] exit Start 0.000046 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.056667 21 0.000080
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976379 7 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.985402 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.994990 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.995049 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] exit Start 0.000238 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023344040s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276062012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] exit Reset 0.000139 1 0.000224
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.063647 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.063952 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] exit Start 0.000023 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.064086 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942822456s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195831299s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976894 7 0.000129
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.985842 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997499 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997528 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022882462s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276046753s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] exit Reset 0.000065 1 0.000118
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] exit Reset 0.000347 1 0.000646
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976969 7 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] exit Start 0.000071 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986092 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.995923 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.995966 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022719383s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276260376s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.057922 21 0.000108
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.064806 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.064861 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] exit Reset 0.000105 1 0.000197
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.065001 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977027 7 0.000245
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986294 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997609 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941881180s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195747375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] exit Reset 0.000174 1 0.000330
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997724 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022457123s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276466370s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] exit Reset 0.000138 1 0.000310
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.058130 21 0.000105
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.065189 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.065574 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.065639 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977923 7 0.000143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941703796s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196022034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986918 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.998548 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.998606 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] exit Reset 0.000133 1 0.000313
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021822929s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276275635s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] exit Reset 0.000100 1 0.000238
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] exit Start 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977909 7 0.000065
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.987123 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997898 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997932 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.059719 21 0.000083
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.066785 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021719933s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276496887s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.066844 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.067025 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] exit Reset 0.000343 1 0.000424
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.978341 7 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.987443 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.998403 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939940453s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194961548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.998578 0 0.000000
Jan 10 17:23:13 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] exit Reset 0.000340 1 0.000617
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021199226s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276519775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] exit Reset 0.000091 1 0.000243
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.059897 21 0.000081
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.066835 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.066978 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.067445 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939948082s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195663452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] exit Start 0.000125 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] exit Reset 0.000291 1 0.000388
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.979177 7 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.060807 21 0.000469
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.068370 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.068426 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.988218 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.999618 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.978958 7 0.000370
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.988196 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.999547 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.999577 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.999685 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020483971s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276596069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] exit Reset 0.000039 1 0.000058
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.068460 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938565254s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194801331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] exit Reset 0.000079 1 0.000284
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] exit Start 0.000035 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.061614 21 0.000108
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.068831 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.068993 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 13)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:53.133872+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.1d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:53.144303+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.1d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.069192 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937766075s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194831848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.060440 21 0.000369
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.069561 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.069695 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020411491s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276550293s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] exit Reset 0.001305 1 0.001424
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.069739 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] exit Reset 0.000800 1 0.000991
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937079430s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194824219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] exit Reset 0.000110 1 0.002129
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063220 21 0.000168
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070373 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070464 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.070493 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936864853s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194839478s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981005 7 0.000161
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990012 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.001656 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063546 21 0.000074
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070611 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070805 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.070849 0 0.000000
Jan 10 17:23:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.001799 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] exit Reset 0.000066 1 0.000137
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329769' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.980890 7 0.000070
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990684 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002294 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018429756s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981281 7 0.000066
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Reset 0.000514 1 0.000701
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990662 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.000895 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.000914 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063831 21 0.000073
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070914 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070963 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018333435s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277122498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071655 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] exit Reset 0.000037 1 0.000151
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936121941s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194946289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] exit Reset 0.000042 1 0.000081
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.002343 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981927 7 0.000057
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990945 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.001873 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065055 21 0.000086
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.001903 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.071680 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.071926 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071972 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.058743 21 0.000101
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070621 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017609596s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276664734s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] exit Reset 0.000064 1 0.000122
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934791565s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Reset 0.000230 1 0.000257
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.064819 21 0.000292
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.072218 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.072282 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.072312 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934700012s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Reset 0.000043 1 0.000081
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017175674s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Reset 0.000575 1 0.001736
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065705 21 0.000158
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.072747 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073102 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073220 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Start 0.000109 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934269905s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193969727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.071390 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] exit Reset 0.000100 1 0.000122
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.982477 7 0.000070
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.991565 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002841 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.002858 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017211914s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277099609s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.069621 21 0.000198
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073403 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073530 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073556 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065992 21 0.000181
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930295944s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.190406799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] exit Reset 0.000078 1 0.000160
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073559 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073859 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.982871 7 0.000066
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.991946 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002959 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073993 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.003007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016772270s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277183533s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] exit Reset 0.000074 1 0.000160
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933568954s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194084167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] exit Reset 0.000204 1 0.000480
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.983269 7 0.000066
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.992312 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.003257 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] exit Start 0.000082 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.066844 21 0.000206
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073601 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.074550 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.003318 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] exit Reset 0.001036 1 0.000197
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016240120s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277198792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] exit Reset 0.000070 1 0.000300
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.074736 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] exit Start 0.000097 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932877541s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193984985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] exit Reset 0.000164 1 0.000365
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.983774 7 0.000055
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.992861 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.005622 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.005661 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015887260s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277290344s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] exit Reset 0.000046 1 0.000228
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] exit Start 0.000074 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071466 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933269501s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195274353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] exit Reset 0.000101 1 0.007726
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932654381s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194847107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] exit Reset 0.004133 1 0.004147
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000031
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Started/Stray
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000243 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000493 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000016
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000136 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001624 1 0.000443
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000225 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000031
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000615 1 0.000147
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000948
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000240 1 0.000216
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000269 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000027
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000041
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000165 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000067 1 0.000095
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000137 1 0.000235
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000115 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000133 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000025
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000036
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000116 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000059
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000081 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000226
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000027
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000199 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000033
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000091 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000249 1 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000171 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000067 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000241
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000032
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025449 2 0.000044
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025367 2 0.000705
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000029
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000267 1 0.000077
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011889 2 0.000164
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012354 2 0.000027
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013651 2 0.000289
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008549 2 0.000217
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007857 2 0.000095
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.029087 2 0.000237
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007148 2 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007442 2 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006930 2 0.000029
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000141 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006781 2 0.000072
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006438 2 0.000041
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003838 2 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000180 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000415 1 0.000052
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000114
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007023 2 0.000085
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009133 2 0.000026
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008001 2 0.002376
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005219 2 0.000025
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004372 2 0.000061
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008707 2 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000069 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000018
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000040
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000187 1 0.000138
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000017
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000038
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000033
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000176 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001008 1 0.000639
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000175 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000076
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008695 2 0.000116
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006956 2 0.000075
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006367 2 0.000050
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005876 2 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005468 2 0.000042
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004559 2 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005093 2 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002853 2 0.000104
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002377 2 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 868352 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:24.969237+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:54.159375+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T16:59:54.170048+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 15)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:54.159375+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T16:59:54.170048+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114002 2 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114601 2 0.000228
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.123926 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114244 2 0.000084
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.119599 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114656 2 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.120758 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.118133 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114641 2 0.000058
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.120273 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114638 2 0.000031
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114905 2 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.119306 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.122074 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114952 2 0.000056
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.121476 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114429 2 0.000035
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.117033 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125372 2 0.000194
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.154879 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127425 2 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127619 2 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153342 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.126785 2 0.000055
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.141191 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.155005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127108 2 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127085 2 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.139374 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.139641 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127189 2 0.000026
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135291 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127243 2 0.000022
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136255 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127339 2 0.000145
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134656 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127193 2 0.000140
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135108 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127180 2 0.000068
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133902 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127475 2 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134462 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128018 2 0.000043
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135059 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124656 2 0.000031
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133952 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124769 2 0.000029
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132972 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125000 2 0.000048
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129703 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125328 2 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132577 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124669 2 0.000065
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134342 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125522 2 0.000045
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130830 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128557 2 0.000032
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132610 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004617 4 0.000281
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004712 4 0.000175
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004735 4 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004643 4 0.000036
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004599 4 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004610 4 0.000058
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004546 4 0.000071
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004581 4 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009560 4 0.001011
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008240 4 0.000103
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008074 4 0.000656
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007925 4 0.000169
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008526 4 0.000181
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000044 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009459 4 0.000239
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008065 4 0.000317
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007777 4 0.000243
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007681 4 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008045 4 0.000510
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007411 4 0.000193
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007149 4 0.000084
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006970 4 0.000097
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006887 4 0.000085
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007985 4 0.000435
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177982 7 0.000118
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177145 7 0.000201
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177578 7 0.000085
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179996 7 0.000068
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173939 7 0.000086
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172051 7 0.000525
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172916 7 0.000089
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173103 7 0.000063
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172031 7 0.000061
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169954 7 0.000700
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169114 7 0.000807
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174467 7 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000251 1 0.000048
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177843 7 0.000152
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.171234 7 0.000245
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.171819 7 0.000052
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176589 7 0.000097
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175796 7 0.001971
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.180718 7 0.000269
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181530 7 0.000194
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181350 7 0.000090
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000968 1 0.000048
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011871 4 0.000055
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012009 4 0.000197
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001125 1 0.000112
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011588 4 0.000511
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012914 4 0.000517
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011625 4 0.000619
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001182 1 0.000020
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001300 1 0.000019
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001382 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012172 4 0.001145
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001492 1 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001652 1 0.000079
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001647 1 0.000070
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001693 1 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001752 1 0.000026
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001796 1 0.000139
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001851 1 0.000133
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001829 1 0.000051
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001855 1 0.000019
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001917 1 0.000112
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001941 1 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002117 1 0.000170
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182100 7 0.002075
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182629 7 0.000442
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181586 7 0.000096
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181368 7 0.000372
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181991 7 0.000161
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.180292 7 0.000146
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175600 7 0.000115
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179851 7 0.000101
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179175 7 0.000538
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002847 1 0.000067
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.178644 7 0.000140
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176859 7 0.000295
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177490 7 0.000141
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179265 7 0.000054
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182918 7 0.000188
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176404 7 0.000089
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002561 1 0.000830
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173060 7 0.000075
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176084 7 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.184455 7 0.000552
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000633 1 0.000039
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174539 7 0.001357
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176340 7 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175023 7 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174149 7 0.000097
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174474 7 0.000155
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000819 1 0.000044
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000966 1 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001175 1 0.000046
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001237 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001341 1 0.000018
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001436 1 0.000145
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001438 1 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001530 1 0.000076
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001713 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001760 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001757 1 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001754 1 0.000073
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001760 1 0.000096
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001710 1 0.000024
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001791 1 0.000078
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001842 1 0.000046
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001789 1 0.000129
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001828 1 0.000093
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001823 1 0.000042
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001970 1 0.000226
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001894 1 0.000120
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001041 1 0.001719
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.010814 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.011124 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.189181 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.016018 1 0.000030
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017041 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.194700 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.023133 1 0.000068
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.024312 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.201556 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030795 1 0.000113
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032081 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.212116 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037749 1 0.000078
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039108 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.213085 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045062 1 0.000073
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046509 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.219476 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052527 1 0.000155
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.054143 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.226420 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059856 1 0.000036
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061582 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.234744 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067515 1 0.000051
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.069228 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.239845 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074574 1 0.000064
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076360 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.245540 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082061 1 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.083860 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.258363 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089295 1 0.000063
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.091265 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.263374 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096624 1 0.000047
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098529 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276605 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103968 1 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105893 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.282584 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111432 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.113338 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.294127 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 630784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118422 1 0.000021
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.120440 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.291838 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126044 1 0.000062
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.128039 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.309631 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132905 1 0.000578
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.135065 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.306926 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139443 1 0.000086
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.142336 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.323774 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146741 1 0.000056
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149611 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.325915 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153911 1 0.000028
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.154573 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.338121 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161082 1 0.000030
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.161941 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.344774 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168168 1 0.000229
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.169364 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.350899 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175822 1 0.000038
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.177059 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.359187 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182836 1 0.000076
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.184135 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.364562 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190208 1 0.000034
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.191586 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.367214 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197582 1 0.000060
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.199143 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.380780 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204732 1 0.000120
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.206235 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.385876 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211951 1 0.000033
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.213700 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.390599 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219742 1 0.000206
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.221354 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.400078 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226734 1 0.000026
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.228541 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.406081 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.233847 1 0.000020
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.235643 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.414942 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241535 1 0.000025
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.243342 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.426393 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248931 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.250751 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.427222 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256149 1 0.000020
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.257917 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.434038 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263616 1 0.000072
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.265491 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.438594 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270587 1 0.000053
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.272482 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.457276 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277937 1 0.000030
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.279773 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.456242 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.285694 1 0.000023
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.287563 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.462693 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292778 1 0.000022
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.294667 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.469356 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300082 1 0.000063
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.302156 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.476769 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.307604 1 0.000036
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.310377 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.490278 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.314784 1 0.000049
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.316793 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.490987 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d791/0x71000, compress 0x0/0x0/0x0, omap 0x5019, meta 0x1a2afe7), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:25.969498+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60104704 unmapped: 704512 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:26.969729+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:27.969867+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2ec21/0x74000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 311352 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:28.970020+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:29.970204+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.037272453s of 10.459216118s, submitted: 327
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 655360 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:30.970343+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:00.227422+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:00.237820+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2ec21/0x74000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 17)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:00.227422+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:00.237820+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60162048 unmapped: 647168 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe153000/0x0/0x4ffc00000, data 0x30237/0x77000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 46 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:31.970541+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:01.257510+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:01.268017+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 19)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:01.257510+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:01.268017+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 638976 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:32.970752+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 321722 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe14e000/0x0/0x4ffc00000, data 0x316b7/0x7a000, compress 0x0/0x0/0x0, omap 0x57ba, meta 0x1a2a846), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 47 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 581632 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:33.970953+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:03.258503+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:03.268971+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 21)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:03.258503+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.1f scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:03.268971+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.1f scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60243968 unmapped: 565248 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:34.971202+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:04.267588+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:04.278145+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 23)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:04.267588+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:04.278145+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60243968 unmapped: 565248 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:35.971480+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 540672 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe148000/0x0/0x4ffc00000, data 0x3414d/0x80000, compress 0x0/0x0/0x0, omap 0x5cd0, meta 0x1a2a330), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 49 handle_osd_map epochs [50,51], i have 49, src has [1,51]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 49 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:36.971752+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 557056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:37.971906+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:07.252641+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:07.263207+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 340017 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 557056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 25)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:07.252641+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:07.263207+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:38.972145+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 524288 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:39.972343+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 524288 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:40.972536+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.106193542s of 11.144284248s, submitted: 15
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe144000/0x0/0x4ffc00000, data 0x36be3/0x86000, compress 0x0/0x0/0x0, omap 0x5f5b, meta 0x1a2a0a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 1515520 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:41.972779+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 1466368 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:42.972925+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:12.268804+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:12.279295+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 347542 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 27)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:12.268804+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:12.279295+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 1433600 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:43.973098+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 1433600 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:44.973250+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 1458176 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:45.973413+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:15.198426+0000 osd.2 (osd.2) 28 : cluster [DBG] 2.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:15.208961+0000 osd.2 (osd.2) 29 : cluster [DBG] 2.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 1458176 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 29)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:15.198426+0000 osd.2 (osd.2) 28 : cluster [DBG] 2.10 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:15.208961+0000 osd.2 (osd.2) 29 : cluster [DBG] 2.10 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:46.973656+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 54 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x3ac8f/0x8f000, compress 0x0/0x0/0x0, omap 0x66fc, meta 0x1a29904), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:47.973901+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 354420 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:48.974047+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:18.225918+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.17 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:18.236242+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.17 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 31)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:18.225918+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.17 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:18.236242+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.17 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:49.974225+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:19.228025+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.8 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:19.238569+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.8 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 54 handle_osd_map epochs [55,56], i have 54, src has [1,56]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 33)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:19.228025+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.8 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:19.238569+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.8 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=0 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000150 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=0 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000048
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000289 1 0.000065
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001192 2 0.000136
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 1417216 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:50.974458+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.811351776s of 10.216451645s, submitted: 16
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 1392640 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 56 handle_osd_map epochs [56,57], i have 57, src has [1,57]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007088 2 0.000096
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008713 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002444 4 0.000176
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:51.974664+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:21.242341+0000 osd.2 (osd.2) 34 : cluster [DBG] 2.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:21.252882+0000 osd.2 (osd.2) 35 : cluster [DBG] 2.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 1376256 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 35)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:21.242341+0000 osd.2 (osd.2) 34 : cluster [DBG] 2.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:21.252882+0000 osd.2 (osd.2) 35 : cluster [DBG] 2.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe132000/0x0/0x4ffc00000, data 0x3ed3b/0x98000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:52.974963+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372924 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 1368064 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:53.975138+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 58 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 262144 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:54.975393+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:24.188275+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:24.198813+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 229376 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 37)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:24.188275+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.a scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:24.198813+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.a scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:55.975641+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:56.975837+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:26.175652+0000 osd.2 (osd.2) 38 : cluster [DBG] 2.c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:26.185922+0000 osd.2 (osd.2) 39 : cluster [DBG] 2.c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 155648 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 39)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:26.175652+0000 osd.2 (osd.2) 38 : cluster [DBG] 2.c scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:26.185922+0000 osd.2 (osd.2) 39 : cluster [DBG] 2.c scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:57.976063+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386526 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x44267/0xa4000, compress 0x0/0x0/0x0, omap 0x763e, meta 0x1a289c2), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:58.976210+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:59.976505+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:00.976652+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:01.976776+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029172897s of 11.149907112s, submitted: 15
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:02.976903+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402333 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.977053+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:33.325897+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.b scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:33.336008+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.b scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe115000/0x0/0x4ffc00000, data 0x4af47/0xb3000, compress 0x0/0x0/0x0, omap 0x82f5, meta 0x1a27d0b), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 41)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:33.325897+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.b scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:33.336008+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.b scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.977285+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 180224 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.977457+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:35.349606+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:35.360109+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.977989+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 43)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:35.349606+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:35.360109+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.978134+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407980 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe110000/0x0/0x4ffc00000, data 0x4c55d/0xb6000, compress 0x0/0x0/0x0, omap 0x8580, meta 0x1a27a80), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.978280+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f(unlocked)] enter Initial
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001224 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000192
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000776 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000444 1 0.001106
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000831 2 0.000183
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 98304 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.978382+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012900 2 0.000203
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.014360 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003906 3 0.000352
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000141 1 0.000221
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137376 3 0.000191
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.978575+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.978792+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.978987+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416166 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.979163+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.979304+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.979413+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.979567+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 57344 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.979760+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.526871681s of 15.590026855s, submitted: 16
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418577 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.979911+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:48.328322+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:48.338834+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 45)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:48.328322+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.0 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:48.338834+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.0 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.980276+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 8192 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.980524+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:50.347104+0000 osd.2 (osd.2) 46 : cluster [DBG] 2.1 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:50.357581+0000 osd.2 (osd.2) 47 : cluster [DBG] 2.1 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 47)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:50.347104+0000 osd.2 (osd.2) 46 : cluster [DBG] 2.1 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:50.357581+0000 osd.2 (osd.2) 47 : cluster [DBG] 2.1 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 1040384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.980830+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:51.376339+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.6 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:51.386905+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.6 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 49)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:51.376339+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.6 scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:51.386905+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.6 scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.981197+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:52.342343+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:52.352828+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424802 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 51)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:52.342343+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.e scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:52.352828+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.e scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.981478+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.981601+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:54.355494+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:54.365955+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 53)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:54.355494+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.d scrub starts
Jan 10 17:23:13 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:54.365955+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.d scrub ok
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.981861+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.982112+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.982283+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:13 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:13 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427213 data_alloc: 218103808 data_used: 858
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:13 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.982466+0000)
Jan 10 17:23:13 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.982629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 983040 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.982825+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 974848 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.982962+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.983109+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.689327240s of 15.030009270s, submitted: 10
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429626 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.983315+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:03.358290+0000 osd.2 (osd.2) 54 : cluster [DBG] 5.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:03.368852+0000 osd.2 (osd.2) 55 : cluster [DBG] 5.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 55)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:03.358290+0000 osd.2 (osd.2) 54 : cluster [DBG] 5.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:03.368852+0000 osd.2 (osd.2) 55 : cluster [DBG] 5.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 958464 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.985300+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 950272 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.986338+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:05.312219+0000 osd.2 (osd.2) 56 : cluster [DBG] 2.1e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:05.322797+0000 osd.2 (osd.2) 57 : cluster [DBG] 2.1e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 57)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:05.312219+0000 osd.2 (osd.2) 56 : cluster [DBG] 2.1e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:05.322797+0000 osd.2 (osd.2) 57 : cluster [DBG] 2.1e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 942080 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.988291+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.988523+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.988751+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.989852+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.990410+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.990619+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.991484+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.991739+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899189949s of 10.917829514s, submitted: 4
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.991974+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:14.276039+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:14.286600+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 59)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:14.276039+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1b scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:14.286600+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1b scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.992254+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:15.261022+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:15.271506+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 61)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:15.261022+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:15.271506+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.992605+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.992769+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436865 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.992976+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 868352 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.993177+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.993476+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 851968 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.993781+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:21.328668+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:21.339212+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 63)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:21.328668+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:21.339212+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.994051+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:22.301434+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:22.311971+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 65)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:22.301434+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:22.311971+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.994318+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.994451+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884464264s of 10.945921898s, submitted: 8
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.994609+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:25.222328+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:25.232911+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 67)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:25.222328+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:25.232911+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.994907+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.995156+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:27.257872+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:27.268483+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 69)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:27.257872+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:27.268483+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446511 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.995422+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 802816 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.995608+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.995819+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:30.223299+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:30.233862+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 71)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:30.223299+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:30.233862+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.996029+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 778240 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.996279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:32.234819+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:32.245355+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 770048 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 73)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:32.234819+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:32.245355+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451337 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.996520+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.996674+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.996877+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.997197+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035273552s of 12.054781914s, submitted: 8
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.997336+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:37.276984+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:37.287497+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 745472 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 75)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:37.276984+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:37.287497+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.997746+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:38.320031+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:38.330556+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 77)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:38.320031+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:38.330556+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.997996+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.998153+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.998312+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.998483+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.998617+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 712704 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.998825+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.999012+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.999730+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:46.342819+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:46.353248+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 79)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:46.342819+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:46.353248+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.999970+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458576 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.000112+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.947065353s of 12.031906128s, submitted: 6
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.000260+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:49.308973+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.15 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:49.319768+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.15 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 655360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 81)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:49.308973+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.15 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:49.319768+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.15 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.000490+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.000719+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.000966+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:52.413294+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:52.424019+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 83)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:52.413294+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:52.424019+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463402 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.001282+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.001504+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.001807+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:55.371920+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:55.382448+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 85)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:55.371920+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:55.382448+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.002235+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 638976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.002400+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.002606+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.002882+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.003039+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.003221+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14750 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.003354+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.003510+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993368149s of 15.010603905s, submitted: 6
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.003657+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:04.319657+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:04.330214+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 87)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:04.319657+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:04.330214+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.003957+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:05.354079+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:05.364685+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 89)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:05.354079+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:05.364685+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.004253+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.004469+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470637 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.004598+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.004784+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.004973+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 573440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.005136+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 565248 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.005274+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:12.291897+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.2 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:12.302215+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.2 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 91)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:12.291897+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.2 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:12.302215+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.2 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473048 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.005426+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.005576+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:14.246625+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:14.257175+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 93)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:14.246625+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:14.257175+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.005759+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.969283104s of 11.986205101s, submitted: 8
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.006134+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:16.306010+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:16.316609+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 95)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:16.306010+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:16.316609+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.006352+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:17.259419+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:17.269660+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 97)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:17.259419+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:17.269660+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.006620+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:18.228062+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:18.238650+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 99)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:18.228062+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:18.238650+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.006937+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.007293+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.007494+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.007899+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.008179+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.008385+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:24.162907+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:24.173427+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 101)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:24.162907+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:24.173427+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.008730+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.008970+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.009207+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 485103 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.009371+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.009553+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.836256981s of 13.859765053s, submitted: 8
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 458752 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.009775+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:30.165880+0000 osd.2 (osd.2) 102 : cluster [DBG] 3.1d scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:30.176364+0000 osd.2 (osd.2) 103 : cluster [DBG] 3.1d scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 103)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:30.165880+0000 osd.2 (osd.2) 102 : cluster [DBG] 3.1d scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:30.176364+0000 osd.2 (osd.2) 103 : cluster [DBG] 3.1d scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.010073+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.010252+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487516 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.010419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.010583+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:34.355948+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:34.366518+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 105)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:34.355948+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:34.366518+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.010850+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 434176 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.011038+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.011173+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489927 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.011417+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 401408 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.011594+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.011766+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.158172607s of 11.166720390s, submitted: 4
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.011918+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:41.332514+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:41.342931+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 107)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:41.332514+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:41.342931+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.012244+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494751 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.012396+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:43.346577+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:43.357209+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 109)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:43.346577+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:43.357209+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.012668+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.012899+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.013110+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.013222+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:47.311792+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:47.322191+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 111)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:47.311792+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:47.322191+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497162 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.013462+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.013615+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.013800+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:50.290185+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:50.300715+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 113)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:50.290185+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.8 scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:50.300715+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.8 scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 311296 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.014274+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:51.304651+0000 osd.2 (osd.2) 114 : cluster [DBG] 6.f scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:51.325810+0000 osd.2 (osd.2) 115 : cluster [DBG] 6.f scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 115)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:51.304651+0000 osd.2 (osd.2) 114 : cluster [DBG] 6.f scrub starts
Jan 10 17:23:14 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:51.325810+0000 osd.2 (osd.2) 115 : cluster [DBG] 6.f scrub ok
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.014488+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.014650+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.014847+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.014987+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.015185+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.015365+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.015531+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.015822+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.015979+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.016107+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.016261+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.016406+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.016579+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.016767+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.016942+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.017077+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 229376 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.017266+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.017468+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.017652+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.017791+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.017983+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.018142+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.018320+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.018550+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.018815+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.018965+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.019176+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.019314+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.019468+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.019667+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.019958+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.020123+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.020258+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.020421+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.020656+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.025908+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.026052+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.026210+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.026367+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.026573+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.026740+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.026939+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.027118+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.027295+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.027505+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.027628+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.027764+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.027973+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.028118+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.028296+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.028465+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.028594+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.028802+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.028953+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.029188+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.029344+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.029594+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.029798+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.029981+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.030110+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.030279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.030422+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.030561+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.030721+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.030879+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.031019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.031201+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.031419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.031623+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.031785+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.032305+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.032751+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.033087+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.033324+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.033652+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.034002+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.036124+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.036287+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.036541+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.036766+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.036977+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.037138+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.037361+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.037492+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.037765+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.037941+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.038042+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.038267+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.038426+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.038595+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.038788+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.039013+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.039231+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.039419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.039680+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.039947+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.040126+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.058683+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.059033+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.059272+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.059472+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.059764+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.059949+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.060163+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.060477+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.060682+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.060906+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.061126+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.061351+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.061509+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.061788+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.061983+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.062286+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.062467+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.062747+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.062909+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.063167+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.063315+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.063566+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.063850+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.064031+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.064200+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.064345+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.064491+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.064786+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.064935+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.065086+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.065229+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.065427+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.065586+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.065729+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.065900+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.066053+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.066215+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.066419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.066616+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.066813+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.066995+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.067137+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.067368+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.067508+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.067740+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.067951+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.068276+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.068563+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.068885+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.069069+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.069272+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.069491+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.069651+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.069845+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.070083+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.070283+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.070457+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.070800+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.071020+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.071253+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.071553+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.071798+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.072047+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.072239+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.072419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.072613+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.072821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.073065+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.073235+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.073373+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.073609+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.073858+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.074083+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.074295+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.074470+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.074647+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.074900+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.075098+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.075260+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.075426+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.075587+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.075803+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.075944+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.076120+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.076580+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.076804+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.076974+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.077329+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.077533+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.077689+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.077811+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.078009+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.078282+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.078507+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.078747+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.078898+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.079028+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.079194+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.079326+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.079473+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.079631+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.079853+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.079986+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.080176+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.080440+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.080692+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.080915+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.143253+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.143458+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.143763+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.143899+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.144099+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.144384+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.144550+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.144909+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.145212+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.145438+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.145802+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.146019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.146203+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.146331+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.146460+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.146629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.146831+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.147024+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.147162+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.147326+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.147733+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.147978+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.148185+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.148362+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.148518+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.148657+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.149021+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.149261+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.149590+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.149753+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.150056+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.150230+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.150389+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.150533+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.150791+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.150961+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.151092+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.151233+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.151383+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.151523+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.151841+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.152013+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.152317+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.152490+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.152629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.152791+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.152977+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.153245+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.153411+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.153610+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.154047+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.154303+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.154576+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.154803+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.155055+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.155272+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.155442+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.155612+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.155868+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.156147+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.156587+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.156849+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.157061+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.157331+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.157517+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.157662+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.157832+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.157991+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.158119+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.158276+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.158461+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.158598+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.158788+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.159023+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.159248+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.159494+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.159620+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.159760+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.159877+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.159992+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.160183+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.160305+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.160422+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.160569+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.160693+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.160833+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.160985+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.161118+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.161313+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.161464+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.161638+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.161796+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.162018+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.162269+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.162486+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.162672+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.162925+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.163159+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.163412+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.163680+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.164236+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.164416+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.164633+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.164983+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.165213+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.165381+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.165539+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.165811+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.165944+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.166078+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.166283+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.166405+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.166505+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.166660+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.166758+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.166890+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.167020+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.167217+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.167372+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.167543+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.167771+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.168041+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.168160+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.170032+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.170224+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.170398+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.170553+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.170733+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.170910+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.171151+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.171436+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.171597+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.171761+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.171883+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.172041+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.172195+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.172357+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.172532+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.172688+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.172872+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.173101+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.173295+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.173474+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.173629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.173784+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.173959+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s
                                           Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.174139+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.174306+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.174575+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.174803+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.175009+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.175234+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.175523+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.175766+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.175940+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.176064+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.176245+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.176383+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.176507+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.176638+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.176898+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.177108+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.177309+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.177487+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.177677+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.177950+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.178114+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.178265+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.178472+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.178653+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.178960+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.179145+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.179327+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.179500+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.179727+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.179880+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.180041+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.180188+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.180359+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.180552+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.180753+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.180991+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.181213+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.181397+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.181552+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.181729+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.181859+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.182071+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.182224+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.197066+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.197351+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.197508+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.197658+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.371817+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.371940+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.372084+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.372314+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.372498+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.372661+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.372789+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.372990+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.373164+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.373450+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.373629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.373821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.373975+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.374103+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.374224+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.374410+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.374582+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.374779+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.374951+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.375237+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.375386+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.375542+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.375834+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.376049+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.376192+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.376359+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.376524+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.376769+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.376924+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.377124+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.377303+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.377459+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.377742+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.377933+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.378111+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.378328+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.378501+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.378745+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.378978+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.380788+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.380952+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.381098+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.381344+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.381583+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.381769+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.381879+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.382070+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.382252+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.382419+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.382575+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.382714+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.382897+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.383079+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.383217+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.383349+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.383478+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.383643+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.383800+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.383951+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.384229+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.384364+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.384667+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.384752+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.384887+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.385005+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.385135+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.385289+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.385494+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.385659+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.385838+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.386025+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.386179+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.386360+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.386841+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.386976+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.387751+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.388350+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.388595+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.388799+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.390086+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.390245+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.390396+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.390569+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.391279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.391798+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.392204+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.392402+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.393122+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.393365+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.393519+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.393744+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.394045+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.394209+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.394425+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.394666+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.394862+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.395016+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.395224+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.395386+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.395538+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.395692+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.395846+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.396028+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.396180+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.396384+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.396542+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.396727+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.397130+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.397276+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.397416+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.397587+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.397764+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.397965+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.398118+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.398279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.398442+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.398592+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.398756+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.398921+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.399097+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.399253+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.399387+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.399552+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.399783+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.399930+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.400080+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.400238+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.400437+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.400601+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.400786+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.401002+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.401131+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.401330+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.401489+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.401665+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.401779+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.401962+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.402145+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.402325+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.402490+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.403004+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.403164+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.404621+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.404827+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.405019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.405170+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.405507+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.405878+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.406522+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.407019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.407225+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.407393+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.407536+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.408059+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.408212+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.408386+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.408526+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.408787+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.409493+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.409938+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.410094+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.410249+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.410515+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.410677+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.410885+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.411039+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.411175+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.411399+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.411588+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.411779+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.411992+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.412151+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.412505+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.412687+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.412957+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.413275+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.413486+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.413682+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.413821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.414003+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.414194+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.414336+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.414508+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.414742+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.414916+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.415140+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.415356+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.415560+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.415725+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.415875+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.416058+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.416263+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.416633+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.416867+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.417147+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.417452+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.417608+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.418146+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.418314+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.418457+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.418600+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.418765+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.418902+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.419039+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.419184+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.419330+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.419507+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.419745+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.419861+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.420059+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.420314+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.420504+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.420749+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.420894+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.421060+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.421219+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.421423+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.421646+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.421794+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.421934+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.422071+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.422278+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.422462+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.422678+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.422912+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.423059+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.423275+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.423552+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.423716+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.423979+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.424249+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.424409+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.424571+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.424760+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.424928+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.425164+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.425379+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.425637+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.425809+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.425965+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.426143+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.426291+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.426463+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.426803+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.426980+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.427147+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.427309+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.427565+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.427755+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.427905+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.428090+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.428238+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.428388+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.428888+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.429069+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.429222+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.429367+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.429543+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.429688+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.429893+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.430016+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621df718000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:23:14 compute-0 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: get_auth_request con 0x5621dffbd400 auth_method 0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.430148+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.430319+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.430536+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.430795+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.430993+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.431134+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.438064+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.438199+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.438403+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.438738+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.438910+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.439082+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.439263+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.439399+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.439630+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.439758+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.439954+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.440119+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.440354+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.440575+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.440908+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.441106+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.441275+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.441565+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.441784+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.442031+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.442265+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.442404+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.442565+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.442749+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.442935+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.443108+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.443294+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.443505+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.443734+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.443891+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.444137+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.444327+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.444528+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.444676+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.444906+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.445032+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.445237+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.445417+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.445592+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.445771+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.445941+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.446164+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.446369+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.446570+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.446766+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.446905+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.447076+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.447220+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.447414+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.447576+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.447852+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.448124+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.448397+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.448639+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.448826+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.449105+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.449292+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.449489+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.449758+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.449994+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.450195+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.450335+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.450480+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.450742+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.450913+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.451057+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.451313+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.451644+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.451916+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.452180+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.452542+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.452841+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.453261+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.453477+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.453853+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.454013+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.454242+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.454580+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.454873+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.455073+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.455343+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.455477+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.455617+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.455831+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.456019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.456283+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.456427+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.456651+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.456816+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.457595+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.458028+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.482374+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.482549+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.482847+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.482995+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.483169+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.483335+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.483482+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.483605+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.483887+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.484155+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.484327+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.484487+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.484642+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.484821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.484999+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.485196+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.485343+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.485592+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.485777+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.485961+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.486089+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.486221+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.486328+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.486585+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.486753+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.487015+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.487242+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.487550+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.487821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.488082+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:28.488332+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.488598+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.488806+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.489074+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.489252+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.489496+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.489772+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.489929+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.490085+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.491950+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.492082+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.493319+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.494282+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.494659+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.494846+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.495448+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.495887+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.496206+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.496411+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.496767+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.497045+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.497234+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.497436+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.497621+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.497793+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.497945+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.498359+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.498748+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.499249+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.499562+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.499955+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.500363+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.500550+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.500855+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.501128+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.501355+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.501833+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.502015+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.502190+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.502466+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.502635+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.502785+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.503005+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.503174+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.503348+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.503518+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.503687+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.503910+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.504102+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.504359+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.504469+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.504754+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.504895+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.505052+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.505212+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.505408+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.505644+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.505810+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.505997+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.506281+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.506433+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.506517+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.506805+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.506992+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.507230+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.507427+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.507600+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.507768+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.507936+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.508215+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.508400+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.508570+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.508768+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.508914+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.509062+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.509223+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.509763+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.509920+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.510328+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.511224+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.511786+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.512635+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.513133+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.513550+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.513829+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.514076+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.514323+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.514516+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.514878+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.515153+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.515360+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.515537+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.515940+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.516123+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.516405+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.516583+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.516821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.516959+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.517110+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.517285+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.517475+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.517789+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.517939+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.518123+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.518357+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.518593+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.518819+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.519043+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.519256+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.519499+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.519747+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.519918+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.520047+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.520220+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.520402+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.520583+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.520792+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.520988+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.521225+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.521444+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.521612+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.521778+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.521973+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.522107+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.522509+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.522641+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.522837+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.523039+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.523221+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.523420+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.523653+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.523821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.524038+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.524238+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.524433+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.524560+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.524798+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.524970+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.525115+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.525295+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.525507+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.525683+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.526007+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.526281+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.526497+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.526815+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.527009+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.527270+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.527489+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.527742+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.527939+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.528135+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.528376+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.528652+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.528855+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.529072+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.529258+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.529463+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.529639+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.529983+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.530272+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.530510+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.530824+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.531062+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.531347+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.531687+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.532053+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.532448+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.532821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.533203+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.533536+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.533831+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.534158+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.534310+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.534436+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.534625+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.534783+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.534932+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.535223+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.535509+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.535788+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.535966+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.536155+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.536461+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.536680+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.536971+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.537172+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.537406+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.537742+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.537976+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.538170+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.538435+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.538739+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.539005+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.539266+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.539535+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.539751+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.539974+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.540280+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.540689+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.540969+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.541191+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.541428+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.541785+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.542455+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.543629+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.543814+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.544008+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.544199+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.544414+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.544543+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.546048+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.546522+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.546957+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.547283+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.547516+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.547831+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.548059+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.548340+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.548953+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.549310+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.549795+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1018.819763184s of 1018.845520020s, submitted: 10
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.550073+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1335296 heap: 70705152 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.550366+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f210/0xbe000, compress 0x0/0x0/0x0, omap 0x8be6, meta 0x1a2741a), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 70 ms_handle_reset con 0x5621e1456c00 session 0x5621e19bee00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 10534912 heap: 75366400 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.550557+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 583003 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 18604032 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.550886+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 72 ms_handle_reset con 0x5621e1457000 session 0x5621e19db180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 18563072 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.551094+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.551449+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.551761+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.552335+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587971 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.552589+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.552794+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.552993+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.282839775s of 10.690481186s, submitted: 34
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.553240+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.553478+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.553663+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.553847+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.553973+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.554187+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.554456+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.554686+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.554943+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.555148+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.555366+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.555476+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.555621+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.555798+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.556020+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.556172+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.556344+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.556483+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.556675+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.556910+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.557116+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.557314+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.557460+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.557631+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.557768+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.557950+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.558109+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.558252+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.558402+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.558636+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.558816+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.558947+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.559119+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.559297+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.559518+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.559732+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.774143219s of 35.782062531s, submitted: 13
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 18374656 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1457400 session 0x5621deefbdc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc493c/0xd3f000, compress 0x0/0x0/0x0, omap 0x9a85, meta 0x1a2657b), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.559898+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03c00 session 0x5621e195a000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 18046976 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03400 session 0x5621e1462a80
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 75 ms_handle_reset con 0x5621e1457800 session 0x5621e1462fc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.560034+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604284 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 17104896 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1457400 session 0x5621e077b6c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1456c00 session 0x5621e0a22e00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02c00 session 0x5621e067ea80
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc78e6/0xd47000, compress 0x0/0x0/0x0, omap 0xa435, meta 0x1a25bcb), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02800 session 0x5621df500000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.560333+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 17080320 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.560469+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 17088512 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 77 ms_handle_reset con 0x5621e1456c00 session 0x5621e077b180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.561081+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 15884288 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 78 ms_handle_reset con 0x5621e1457400 session 0x5621e19da380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.561256+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 15704064 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.561398+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610354 data_alloc: 218103808 data_used: 858
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 15663104 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 79 ms_handle_reset con 0x5621e1bcec00 session 0x5621e14636c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.561528+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd47a000/0x0/0x4ffc00000, data 0xccb716/0xd4d000, compress 0x0/0x0/0x0, omap 0xa7da, meta 0x1a25826), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 15753216 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 81 ms_handle_reset con 0x5621e1c03400 session 0x5621e0a23a40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.561717+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 15540224 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 82 ms_handle_reset con 0x5621e1c02c00 session 0x5621df8eca80
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.562030+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 15392768 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.066205978s of 10.445180893s, submitted: 195
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947dc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1457000 session 0x5621df500380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.562302+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 15171584 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 84 ms_handle_reset con 0x5621e1457800 session 0x5621e0a22700
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.562465+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645550 data_alloc: 218103808 data_used: 4919
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 15007744 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fd468000/0x0/0x4ffc00000, data 0xcd301e/0xd60000, compress 0x0/0x0/0x0, omap 0xcff3, meta 0x1a2300d), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 85 ms_handle_reset con 0x5621e1457400 session 0x5621e19be8c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.562683+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 14778368 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c27/0xd6b000, compress 0x0/0x0/0x0, omap 0xdcb6, meta 0x1a2234a), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.562892+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 13451264 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.563079+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 86 ms_handle_reset con 0x5621e1456c00 session 0x5621e19da8c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 21553152 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.563215+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 20324352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 87 ms_handle_reset con 0x5621e1457000 session 0x5621df500e00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.563427+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664901 data_alloc: 218103808 data_used: 4919
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1456000 session 0x5621e19468c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1bcec00 session 0x5621e1947a40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0xcdadfd/0xd71000, compress 0x0/0x0/0x0, omap 0xea36, meta 0x2bc15ca), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.563573+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 18849792 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 89 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19801c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.563855+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 90 ms_handle_reset con 0x5621e1bd8c00 session 0x5621e0a22a80
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.564029+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 18989056 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 91 ms_handle_reset con 0x5621e1456000 session 0x5621e1981180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.402823448s of 10.174050331s, submitted: 340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.564279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 18792448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 92 ms_handle_reset con 0x5621e1457000 session 0x5621e1946e00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.564459+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc2b3000/0x0/0x4ffc00000, data 0xcdeaed/0xd79000, compress 0x0/0x0/0x0, omap 0x108a3, meta 0x2bbf75d), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675212 data_alloc: 218103808 data_used: 21160
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bcec00 session 0x5621e19bf880
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 18677760 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19db880
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.564664+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.564793+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.565089+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.565327+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.565446+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677676 data_alloc: 218103808 data_used: 21160
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.565669+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 95 ms_handle_reset con 0x5621e1456000 session 0x5621e1981500
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c02400 session 0x5621e196b6c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.565897+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.566124+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.566416+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.566599+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.566863+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.567041+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.567147+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.567300+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.567473+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c03800 session 0x5621e1991c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.875562668s of 17.118930817s, submitted: 126
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.567661+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1457000 session 0x5621e0061880
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.567874+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0xce568d/0xd87000, compress 0x0/0x0/0x0, omap 0x11946, meta 0x2bbe6ba), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1bcec00 session 0x5621e0061500
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.568000+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.568198+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.568423+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 99 ms_handle_reset con 0x5621e1456000 session 0x5621dfe81a40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698132 data_alloc: 218103808 data_used: 29317
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 18628608 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.568597+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0xce82c1/0xd90000, compress 0x0/0x0/0x0, omap 0x11fd8, meta 0x2bbe028), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.568796+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.568956+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.569156+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 18546688 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.569344+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1bd8800 session 0x5621e0a22380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1c03800 session 0x5621e195b180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705238 data_alloc: 218103808 data_used: 33413
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 18522112 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.569478+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.190230370s of 10.334465027s, submitted: 49
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 18407424 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561800 session 0x5621df8edc00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561400 session 0x5621e19bfc00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.569632+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 102 ms_handle_reset con 0x5621e1456000 session 0x5621e1980000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.569830+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 18178048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.570049+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 103 ms_handle_reset con 0x5621e1bd8800 session 0x5621e199fc00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 ms_handle_reset con 0x5621e1c03800 session 0x5621e077a1c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.572031+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717689 data_alloc: 218103808 data_used: 33413
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.572208+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.572382+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.572578+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.572827+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.573017+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717945 data_alloc: 218103808 data_used: 34639
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.573172+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621df8ec000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561000 session 0x5621e1980fc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561800 session 0x5621e1946a80
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e038ddc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621e196b340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621e038dc00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 18087936 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.056212425s of 10.232484818s, submitted: 119
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e19be000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.573405+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e05b8000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 18096128 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.573554+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621df8ed880
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e1980c40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.573769+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560c00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.573983+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726131 data_alloc: 218103808 data_used: 35205
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc284000/0x0/0x4ffc00000, data 0xcf0a6f/0xda6000, compress 0x0/0x0/0x0, omap 0x13c81, meta 0x2bbc37f), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.574189+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.574394+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3560800 session 0x5621e070c380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.574572+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1456000 session 0x5621deefbdc0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1c03800 session 0x5621e0061340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19ee1c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 17776640 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0xcf207e/0xdaa000, compress 0x0/0x0/0x0, omap 0x13ff0, meta 0x2bbc010), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e3561400 session 0x5621deefbc00
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.574826+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 107 ms_handle_reset con 0x5621e3560400 session 0x5621dfe81880
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.575043+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1456000 session 0x5621e1946380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737866 data_alloc: 218103808 data_used: 35783
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.575279+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1bd8800 session 0x5621e1462540
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1c03800 session 0x5621e0a23500
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3561400 session 0x5621e19db500
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.575462+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.728271484s of 10.840334892s, submitted: 55
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3560000 session 0x5621e19be1c0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.575655+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e1456000 session 0x5621e196ba40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf62c5/0xdb3000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.575921+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3561800 session 0x5621e070cc40
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3560c00 session 0x5621e0a23340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf6283/0xdb2000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.576070+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740491 data_alloc: 218103808 data_used: 36295
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 110 ms_handle_reset con 0x5621e1bd8800 session 0x5621dfe81500
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.576246+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.576377+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc27a000/0x0/0x4ffc00000, data 0xcf784b/0xdb2000, compress 0x0/0x0/0x0, omap 0x15133, meta 0x2bbaecd), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 17760256 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.576563+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e196b180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1c02400 session 0x5621e1981340
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1456000 session 0x5621df8ec380
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.576718+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e077a700
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0xcf8ce4/0xdb3000, compress 0x0/0x0/0x0, omap 0x15543, meta 0x2bbaabd), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.576922+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737991 data_alloc: 218103808 data_used: 32777
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 112 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf180
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.577112+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0xcfa2f2/0xdb5000, compress 0x0/0x0/0x0, omap 0x15a01, meta 0x2bba5ff), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.577313+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.577502+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.577821+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0xcfb7be/0xdb8000, compress 0x0/0x0/0x0, omap 0x15c9a, meta 0x2bba366), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.578059+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743972 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.406369209s of 13.619614601s, submitted: 155
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.578260+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.582254+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.582775+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.582979+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.583114+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.583383+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.583555+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.583812+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.584275+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.584462+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.584817+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.584999+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.585214+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.586066+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.586357+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.586827+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.587288+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.587544+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.587975+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.588397+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.588772+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.589046+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.589259+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.589482+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.589759+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.589993+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.590177+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.590445+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.590739+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.591056+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.591313+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.591517+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.591828+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.592005+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.592202+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.592391+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.592596+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.592776+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.593053+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.593208+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.593386+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.593571+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.593813+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.765019+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.765148+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.765337+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.765565+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.765809+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.766036+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.766266+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.766410+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.766595+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.766804+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.767046+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.767186+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.767514+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.767768+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.767955+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.768227+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.768398+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.768616+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.768818+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.769024+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.769243+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.769426+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.769613+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.769774+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.769953+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.770209+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.770463+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.770619+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.770786+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.770931+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.771117+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.771289+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.771417+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.771616+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.771751+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.771909+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.772051+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:14 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:14 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.772234+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 17743872 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.772370+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.772533+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.772753+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:23:14 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.773024+0000)
Jan 10 17:23:14 compute-0 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:23:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 17:23:14 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100945430' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:23:14 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:14 compute-0 ceph-mon[75249]: from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:14 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1329769' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 17:23:14 compute-0 ceph-mon[75249]: from='client.14750 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:14 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4100945430' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:23:15 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 17:23:15 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173986491' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:23:15 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 17:23:15 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018602978' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:15 compute-0 ceph-mon[75249]: from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4173986491' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:23:15 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3018602978' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:23:16 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 17:23:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464268086' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:23:16 compute-0 crontab[245521]: (root) LIST (root)
Jan 10 17:23:16 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 17:23:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045665878' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:23:16 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:16 compute-0 ceph-mon[75249]: from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:16 compute-0 ceph-mon[75249]: from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/464268086' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:23:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1045665878' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:23:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 10 17:23:17 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463428898' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 17:23:17 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:17 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:17 compute-0 ceph-mon[75249]: from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:17 compute-0 ceph-mon[75249]: pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:17 compute-0 ceph-mon[75249]: from='client.14772 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:17 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/463428898' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 17:23:18 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14782 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:18 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:18.322+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:23:18 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:23:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 10 17:23:18 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694601461' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 17:23:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 10 17:23:18 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395341979' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 17:23:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 10 17:23:18 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037386076' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: from='client.14776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3694601461' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1395341979' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3037386076' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000022
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000022
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000029
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000143 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000143 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000074
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000258 1 0.000072
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000131 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000059
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000159 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000182 1 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000018
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000166 1 0.000086
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000067
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.044729 15 0.000101
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.049968 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.050040 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.050106 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955799103s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796836853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] exit Reset 0.000086 1 0.000173
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349070 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.356212 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836077 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836105 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650593758s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491889954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] exit Reset 0.000063 1 0.000101
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349443 1 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.356491 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836540 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836574 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650234222s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491912842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] exit Reset 0.000057 1 0.000112
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045693 15 0.000066
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051057 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051116 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051146 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954932213s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796813965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] exit Reset 0.000043 1 0.000082
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014383 2 0.000120
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349955 1 0.000032
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357106 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837218 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837260 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649713516s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491882324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] exit Reset 0.000047 1 0.000102
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350028 1 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357198 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.835981 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649621010s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492034912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] exit Reset 0.000055 1 0.000098
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] exit Start 0.000041 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350224 1 0.000049
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357398 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836521 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836550 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649418831s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492172241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] exit Reset 0.000130 1 0.000169
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350774 1 0.000089
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357837 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837228 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837256 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648921013s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491943359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] exit Reset 0.000068 1 0.000092
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.047242 15 0.000093
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052893 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.052954 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.052975 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953218460s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796546936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] exit Reset 0.000058 1 0.000087
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.351080 1 0.000052
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.358116 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837470 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837492 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648669243s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Reset 0.000043 1 0.000070
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.047419 15 0.000082
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052809 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053040 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053156 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953178406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796897888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] exit Reset 0.000039 1 0.000064
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.048026 15 0.000068
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053540 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053577 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053597 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952614784s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796539307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] exit Reset 0.000052 1 0.000100
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.351703 1 0.000023
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.358780 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.839052 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.839076 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648002625s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492103577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] exit Reset 0.000041 1 0.000067
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.048458 15 0.000058
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.054034 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.054073 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.054093 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952114105s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796447754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] exit Reset 0.000046 1 0.000076
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018216 2 0.000111
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354095 1 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361195 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840885 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.840912 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.050784 15 0.000080
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056326 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.056380 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645599365s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492210388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.056444 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] exit Reset 0.000132 1 0.000138
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949820518s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796524048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] exit Reset 0.000086 1 0.000175
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354406 1 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361472 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841110 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841133 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645345688s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492187500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] exit Reset 0.000051 1 0.000080
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.050258 15 0.000163
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056758 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.056805 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.056830 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949461937s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Reset 0.000057 1 0.000102
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354125 1 0.000167
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361621 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840828 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841768 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645701408s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492774963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] exit Reset 0.000039 1 0.000091
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051271 15 0.000169
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057080 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.057181 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.057209 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949236870s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051652 15 0.000211
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057274 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.057343 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.057373 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948904037s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796211243s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] exit Reset 0.000047 1 0.000091
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354980 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362050 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840599 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.840628 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644659996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Reset 0.000047 1 0.000094
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051462 15 0.000139
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056937 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058192 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058218 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949141502s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796890259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] exit Reset 0.000036 1 0.000073
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354990 1 0.000058
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362001 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841285 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841301 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644832611s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492797852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] exit Reset 0.000045 1 0.000091
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.052012 15 0.000149
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057708 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058206 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058312 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948488235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796607971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] exit Reset 0.000034 1 0.000067
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.355422 1 0.000066
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362574 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841566 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841587 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644400597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492721558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] exit Reset 0.000031 1 0.000053
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.053036 15 0.000101
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.058658 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058713 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058736 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947714806s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796150208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] exit Reset 0.000058 1 0.000079
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.355647 1 0.000146
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362769 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841797 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841818 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644159317s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492759705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] exit Reset 0.000032 1 0.000053
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.053118 15 0.000209
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.058534 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058589 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058689 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947475433s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796226501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] exit Reset 0.000037 1 0.000097
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020279 2 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020091 2 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019579 2 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019380 2 0.000029
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020092 2 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000023
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000328
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019982 2 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.357547 1 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.364576 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.843615 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.843656 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642189980s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492256165s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] exit Reset 0.000067 1 0.000141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000190 1 0.000040
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Reset 0.000107 1 0.000141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000705 1 0.000080
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000053
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000155 1 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000046
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000111 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000118 1 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000107 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000039
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000030
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000030
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000112 1 0.000052
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000054
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.027201 2 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.026016 2 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.026106 2 0.000024
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024975 2 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000173 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023887 2 0.000079
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024971 2 0.000040
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024799 2 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023450 2 0.000061
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021282 2 0.000061
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020790 2 0.000088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020386 2 0.000113
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019940 2 0.000054
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000019
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000078
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025609 2 0.000126
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024180 2 0.000052
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012109 2 0.000165
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010481 2 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010338 2 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009765 2 0.000066
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010374 2 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.009613 2 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.008808 2 0.000040
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008598 2 0.000058
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008392 2 0.000053
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008167 2 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011967 2 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007741 2 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012523 2 0.000054
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.008144 2 0.000030
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.007287 2 0.000039
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006787 2 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007129 2 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003309 2 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007851 2 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014713 2 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:24.401985+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:54.239931+0000 osd.1 (osd.1) 12 : cluster [DBG] 7.1e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:54.249618+0000 osd.1 (osd.1) 13 : cluster [DBG] 7.1e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 13)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:54.239931+0000 osd.1 (osd.1) 12 : cluster [DBG] 7.1e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:54.249618+0000 osd.1 (osd.1) 13 : cluster [DBG] 7.1e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122323 2 0.000080
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132335 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122844 2 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133483 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122186 2 0.000131
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.131238 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122819 2 0.000187
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.133324 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122332 2 0.000099
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122110 2 0.000040
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.132133 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130631 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122349 2 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131066 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122209 2 0.000038
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130068 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122041 2 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.129492 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122347 2 0.000040
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130655 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122165 2 0.000095
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121965 2 0.000081
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.130430 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129274 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122003 2 0.000310
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121945 2 0.000157
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.125552 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122197 2 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123792 2 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129128 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136146 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122627 2 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134775 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.136635 2 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.156750 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123235 2 0.000077
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133774 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137651 2 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157132 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137813 2 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157527 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137735 2 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157939 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.138048 2 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.158300 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.138143 2 0.000039
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.158547 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123102 2 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136399 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.140854 2 0.000064
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.159274 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127599 2 0.000051
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.154911 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127388 2 0.000194
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125790 2 0.001323
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.150283 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153751 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127454 2 0.000058
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152555 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127591 2 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127674 2 0.000242
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127623 2 0.000079
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153986 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152489 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152722 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.126500 2 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152227 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123581 2 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.138443 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127670 2 0.000098
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.151365 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127788 2 0.000167
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.151985 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127778 2 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148801 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.146321 2 0.000071
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.161075 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127973 2 0.000174
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.149508 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127920 2 0.000070
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148021 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128031 2 0.000088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148636 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002992 4 0.000123
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003725 4 0.000695
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.144046 7 0.000311
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008979 4 0.000088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.009193 5 0.000185
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010219 4 0.000332
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010174 5 0.000291
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010183 5 0.000350
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010124 4 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010106 4 0.000059
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010050 4 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001243 1 0.000074
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010257 5 0.000331
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010184 4 0.000282
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.010115 5 0.000156
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010046 4 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010077 4 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009955 4 0.000077
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009946 4 0.000036
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009891 4 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009813 4 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009827 4 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009635 4 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009586 4 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009460 4 0.000388
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009377 4 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009405 4 0.000091
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009384 4 0.000049
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009048 4 0.000077
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009063 4 0.000110
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010346 4 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009115 4 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009043 4 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009040 4 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009029 4 0.000049
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008968 4 0.000071
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008899 4 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008945 4 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011254 5 0.001322
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009084 4 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009069 4 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009036 4 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011637 4 0.002788
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011569 4 0.002735
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.152598 7 0.000051
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168352 7 0.000112
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.170051 7 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.170536 7 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169394 7 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153978 7 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158863 7 0.000093
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156982 7 0.000097
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.150645 7 0.003536
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154534 7 0.000071
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.157704 7 0.000083
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153469 7 0.000069
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158705 7 0.000056
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158133 7 0.000083
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.159603 7 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158443 7 0.000087
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168436 7 0.000078
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153954 7 0.000078
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153785 7 0.000064
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.159850 7 0.000069
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008645 1 0.000038
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008770 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.152858 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175597 7 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174688 7 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174384 7 0.000100
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173914 7 0.000076
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176573 7 0.000041
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173930 7 0.000059
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163743 7 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166482 7 0.000079
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161014 7 0.000088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166663 7 0.000093
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161283 7 0.000090
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.164316 7 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161261 7 0.000066
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166461 7 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163189 7 0.000104
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161499 7 0.000063
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163471 7 0.000071
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163762 7 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168054 7 0.000074
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.165554 7 0.000094
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:25.402192+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.291023 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.291349 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409798 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125822 1 0.000128
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.417378 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.148772 1 0.000057
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.566021 1 0.000022
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.139713 1 0.000116
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.705941 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.073770 1 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.778903 1 0.000096
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069879 1 0.000136
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845419 1 0.000032
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845580 1 0.000019
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845766 1 0.000013
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845826 1 0.000013
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845944 1 0.000013
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846007 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846055 1 0.000014
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846143 1 0.000014
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846186 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846200 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846286 1 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846361 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846433 1 0.000017
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846507 1 0.000018
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846606 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846689 1 0.000015
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846767 1 0.000019
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846838 1 0.000016
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846904 1 0.000020
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846976 1 0.000187
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842443 1 0.000043
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842540 1 0.000012
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842237 1 0.000087
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842195 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842059 1 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842129 1 0.000052
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841990 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841811 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841797 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841551 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841596 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841423 1 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841424 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841174 1 0.000032
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838555 1 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838372 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838135 1 0.000039
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838102 1 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838117 1 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.837948 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009748 1 0.000131
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.855253 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.007891 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015062 1 0.000130
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.860742 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.029125 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022005 1 0.000030
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.867812 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.037886 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 2023424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029245 1 0.000078
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.875124 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.045683 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036517 1 0.000024
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.882490 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.051910 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044002 1 0.000023
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.890041 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.044053 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051260 1 0.000067
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.897356 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.056281 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058509 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.904688 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.061707 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.065843 1 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.912074 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.062825 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073151 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.919387 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.072891 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080508 1 0.000032
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.926831 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.084594 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.087763 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.934165 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.092904 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095141 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.941607 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.099772 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102769 1 0.000054
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.949348 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108985 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109882 1 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.956538 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.115014 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117287 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.964022 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.132492 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124701 1 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.971519 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.125510 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131697 1 0.000044
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.978571 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.132389 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139163 1 0.000045
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.986126 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.146015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146381 1 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.993550 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.148141 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153658 1 0.000031
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.996138 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.171774 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160944 1 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.003523 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.178234 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168206 1 0.000037
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.010488 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.184936 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175559 1 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.017790 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.191726 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:26.402320+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182981 1 0.000156
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.025108 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.201717 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190076 1 0.000029
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.032244 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.206217 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197382 1 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.039402 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.203190 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204759 1 0.000049
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.046600 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.213131 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212064 1 0.000027
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.053899 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.214963 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219566 1 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.061169 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.227895 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226746 1 0.000025
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068403 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.229727 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234315 1 0.000026
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075799 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.240158 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241365 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.082821 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.244114 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248715 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089920 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.256426 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256375 1 0.000024
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.094982 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.258219 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263420 1 0.000073
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.101847 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.263389 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270640 1 0.000061
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.108856 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.272372 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277952 1 0.000028
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.116094 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.279889 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.285279 1 0.000083
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.123459 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.291569 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292478 1 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.130472 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.296073 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 1957888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:27.402482+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 4 last_log 17 sent 13 num 4 unsent 4 sending 4
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:56.521645+0000 osd.1 (osd.1) 14 : cluster [DBG] 3.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:56.532200+0000 osd.1 (osd.1) 15 : cluster [DBG] 3.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:57.334127+0000 osd.1 (osd.1) 16 : cluster [DBG] 7.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:57.344631+0000 osd.1 (osd.1) 17 : cluster [DBG] 7.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 1933312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 17)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:56.521645+0000 osd.1 (osd.1) 14 : cluster [DBG] 3.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:56.532200+0000 osd.1 (osd.1) 15 : cluster [DBG] 3.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:57.334127+0000 osd.1 (osd.1) 16 : cluster [DBG] 7.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:57.344631+0000 osd.1 (osd.1) 17 : cluster [DBG] 7.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:28.402773+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 45 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x9f159/0xea000, compress 0x0/0x0/0x0, omap 0x742b, meta 0x1a28bd5), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 1925120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:29.403379+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:59.388614+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T16:59:59.399153+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:30.403600+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 4 last_log 21 sent 19 num 4 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:00.347367+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:00.357929+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 19)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:59.388614+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T16:59:59.399153+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 21)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:00.347367+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:00.357929+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 374176 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000120 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000029
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000638 1 0.000477
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000033
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000095
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000021
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000034
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000659 2 0.000093
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001536 2 0.000093
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001091 2 0.000032
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002306 2 0.000106
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1835008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:31.403826+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564181 2 0.000247
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.565547 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564003 2 0.000140
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.567065 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564473 2 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.565296 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564612 2 0.000170
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.566409 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001727 4 0.000187
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002935 4 0.000149
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002644 4 0.000223
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002959 5 0.000224
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000164 1 0.000070
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007928 2 0.000184
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.008205 1 0.000087
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074415 1 0.000198
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000031 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa076f/0xed000, compress 0x0/0x0/0x0, omap 0x76a0, meta 0x1a28960), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1835008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:32.404006+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.003501892s of 10.250342369s, submitted: 423
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.292675 7 0.000154
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.082731 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.208309 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.208363 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926258087s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995513916s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.222692 7 0.000119
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.082947 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.213400 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] exit Reset 0.000093 1 0.000162
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.213425 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925914764s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995262146s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] exit Reset 0.000067 1 0.000102
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.507078 7 0.000163
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.083769 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.215024 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.215050 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925272942s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995002747s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] exit Reset 0.000057 1 0.000084
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.782265 7 0.000151
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.083947 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.216096 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.216127 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924705505s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.994689941s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] exit Reset 0.000085 1 0.000117
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 48 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 1810432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:33.404145+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.021868 7 0.000091
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022355 7 0.000128
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022497 7 0.000089
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.021821 7 0.000137
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.014557 2 0.000083
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.014593 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000164 1 0.000086
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fe0d3000/0x0/0x4ffc00000, data 0xa3205/0xf3000, compress 0x0/0x0/0x0, omap 0x7d8d, meta 0x1a28273), peers [0,2] op hist [0,0,0,0,1])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.121421 2 0.000266
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.121633 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.158666 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138388 2 0.000173
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138457 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000118
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.137878 2 0.000374
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.138165 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.298630 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.345136 2 0.000055
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.345198 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000117 1 0.000153
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.405514 2 0.000089
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.405576 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000181 1 0.000163
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.079181 2 0.000236
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.079371 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.446454 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.033219 2 0.000160
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.033468 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.461635 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1802240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:34.404343+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:04.380645+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:04.391110+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 23)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:04.380645+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:04.391110+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000036
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000251 1 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000616 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000035
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000083
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002300 2 0.001261
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.001238 2 0.000077
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 1736704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:35.404629+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:05.371171+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:05.381727+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 25)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:05.371171+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:05.381727+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012166 2 0.000114
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012171 2 0.000112
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.013684 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.015281 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002154 4 0.000253
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000094 1 0.000084
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002297 4 0.000295
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007621 2 0.000065
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.007694 2 0.000073
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395490 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.252192 1 0.000111
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000034 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:36.404939+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:37.405154+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa74e9/0xfc000, compress 0x0/0x0/0x0, omap 0x8978, meta 0x1a27688), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:38.405255+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:08.371827+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:08.382355+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa74e9/0xfc000, compress 0x0/0x0/0x0, omap 0x8978, meta 0x1a27688), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 27)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:08.371827+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:08.382355+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:39.405447+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:09.361078+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:09.371413+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 29)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:09.361078+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:09.371413+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:40.405773+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399912 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 15.298156 21 0.000176
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 16.014371 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 17.143886 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 17.143933 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.995003700s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995697021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] exit Reset 0.000092 1 0.000152
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 15.587230 21 0.000161
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 16.014908 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 17.148261 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 17.148307 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994361877s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995353699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] exit Reset 0.001005 1 0.001088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:41.405941+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.936134 6 0.000102
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.937499 6 0.000063
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.079444 3 0.000080
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.079469 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000104 1 0.000048
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:42.406080+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:12.316798+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:12.386241+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.885927200s of 10.011228561s, submitted: 62
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 DELETING pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.134527 2 0.000316
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.134692 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.150356 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.224618 3 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.224662 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000126 1 0.000086
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 DELETING pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.017283 2 0.000294
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.017476 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.179681 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 31)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:12.316798+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:12.386241+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa9edf/0x100000, compress 0x0/0x0/0x0, omap 0x8f85, meta 0x1a2707b), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:43.406272+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:44.406419+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:14.288832+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:14.299357+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:45.406755+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 4 last_log 35 sent 33 num 4 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:15.268749+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:15.279183+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 33)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:14.288832+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:14.299357+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 35)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:15.268749+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:15.279183+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407557 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe0c5000/0x0/0x4ffc00000, data 0xab4f5/0x103000, compress 0x0/0x0/0x0, omap 0x922d, meta 0x1a26dd3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:46.407038+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:16.223761+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:16.234300+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 37)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:16.223761+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:16.234300+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 507904 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:47.407294+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:48.407496+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:49.407964+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 475136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:50.408135+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412276 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 442368 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:51.408345+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:21.213046+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:21.223659+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 39)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:21.213046+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:21.223659+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 56 heartbeat osd_stat(store_statfs(0x4fe0bf000/0x0/0x4ffc00000, data 0xae121/0x109000, compress 0x0/0x0/0x0, omap 0x9744, meta 0x1a268bc), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 56 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:52.408544+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177284241s of 10.220639229s, submitted: 17
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.306095 40 0.000408
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.316440 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.445729 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.445766 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692907333s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 active pruub 119.995796204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] exit Reset 0.000124 1 0.000211
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 58 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:53.408759+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:23.252358+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:23.262883+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 41)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:23.252358+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:23.262883+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.695591 6 0.000110
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001112 2 0.000101
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 DELETING pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003949 1 0.000062
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.005134 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.700796 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:54.409013+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.436269 39 0.000211
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.438120 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 24.003692 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 24.003729 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564128876s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 active pruub 118.069427490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] exit Reset 0.000153 1 0.000246
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] exit Start 0.000021 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Started/Stray
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 1400832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:55.409207+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:25.250358+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:25.260876+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 60 heartbeat osd_stat(store_statfs(0x4fe0b3000/0x0/0x4ffc00000, data 0xb364d/0x115000, compress 0x0/0x0/0x0, omap 0xa15f, meta 0x1a25ea1), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432423 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 43)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:25.250358+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:25.260876+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.987625 7 0.000183
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000141 1 0.000112
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 DELETING pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003077 1 0.000088
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.003301 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.991026 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:56.409418+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:57.409623+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=0 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000159 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=0 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000052
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000300 1 0.000072
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000587 2 0.000119
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1376256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:58.409797+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.801783 2 0.000081
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.802768 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002677 3 0.000184
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000076 1 0.000050
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007659 3 0.000047
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1368064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:59.409980+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1359872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:00.410150+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:30.244168+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:30.254646+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444204 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 45)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:30.244168+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:30.254646+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d(unlocked)] enter Initial
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=0 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000103 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=0 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000030
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000464 1 0.000042
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000953 2 0.000058
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:01.410424+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:31.282907+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:31.293398+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 47)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:31.282907+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:31.293398+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.643723 2 0.000060
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.645228 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002158 4 0.000286
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000192 1 0.000124
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067784 2 0.000098
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:02.410756+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a3000/0x0/0x4ffc00000, data 0xba1dd/0x125000, compress 0x0/0x0/0x0, omap 0xafb1, meta 0x1a2504f), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a3000/0x0/0x4ffc00000, data 0xba1dd/0x125000, compress 0x0/0x0/0x0, omap 0xafb1, meta 0x1a2504f), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386300087s of 10.565129280s, submitted: 38
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.410913+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.411059+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.411212+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:35.243461+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:35.254012+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 461170 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 49)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:35.243461+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:35.254012+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.411422+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.411638+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.411814+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.412002+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.412196+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:40.242252+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:40.252828+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 51)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:40.242252+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:40.252828+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.412459+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.412626+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.412924+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.413089+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.413248+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.413425+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.835360527s of 14.069359779s, submitted: 7
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.413605+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:47.308225+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:47.318630+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.413941+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 53)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:47.308225+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:47.318630+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.414209+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.414404+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471536 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.414604+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:51.357784+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:51.368201+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 55)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:51.357784+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:51.368201+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.414888+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.415039+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.415179+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.415377+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1228800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473949 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.415526+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.415678+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012902260s of 11.086735725s, submitted: 4
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.415857+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:58.395083+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:58.405465+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 57)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:58.395083+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:58.405465+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.416612+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:59.387366+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:59.397948+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 59)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:59.387366+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:59.397948+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.416908+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:00.364031+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:00.374534+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 61)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:00.364031+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:00.374534+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.417377+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.417548+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.417781+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.417948+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.418963+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.419923+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.420799+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:07.365231+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:07.375798+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1130496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 63)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:07.365231+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:07.375798+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.421280+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.422288+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.422772+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026464462s of 12.046990395s, submitted: 8
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.423906+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:10.441201+0000 osd.1 (osd.1) 64 : cluster [DBG] 2.1b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:10.451643+0000 osd.1 (osd.1) 65 : cluster [DBG] 2.1b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 65)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:10.441201+0000 osd.1 (osd.1) 64 : cluster [DBG] 2.1b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:10.451643+0000 osd.1 (osd.1) 65 : cluster [DBG] 2.1b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.424829+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.425084+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.425283+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:14.407051+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:14.417566+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 67)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:14.407051+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.10 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:14.417566+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.10 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.426039+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488419 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.426463+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.426841+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.427169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:18.376581+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:18.387161+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 69)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:18.376581+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:18.387161+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.427380+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.427584+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490830 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.427784+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.427953+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.428092+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.865109444s of 13.878160477s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.428342+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:24.320246+0000 osd.1 (osd.1) 70 : cluster [DBG] 2.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:24.330920+0000 osd.1 (osd.1) 71 : cluster [DBG] 2.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.428541+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 71)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:24.320246+0000 osd.1 (osd.1) 70 : cluster [DBG] 2.17 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:24.330920+0000 osd.1 (osd.1) 71 : cluster [DBG] 2.17 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493243 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.428750+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.428930+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.429086+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.429442+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:29.373329+0000 osd.1 (osd.1) 72 : cluster [DBG] 5.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:29.383874+0000 osd.1 (osd.1) 73 : cluster [DBG] 5.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 73)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:29.373329+0000 osd.1 (osd.1) 72 : cluster [DBG] 5.13 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:29.383874+0000 osd.1 (osd.1) 73 : cluster [DBG] 5.13 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.429883+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 495656 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.430022+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.430209+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.430353+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.430562+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:33.475452+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.15 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:33.486066+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.15 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 75)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:33.475452+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.15 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:33.486066+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.15 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.430836+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498069 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.431039+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.431170+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.043815613s of 14.055473328s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.431384+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:38.375811+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:38.386375+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 77)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:38.375811+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:38.386375+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.431987+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.432129+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500482 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.432267+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:41.376259+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:41.386831+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 79)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:41.376259+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.16 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:41.386831+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.16 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.432474+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.432576+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:43.350088+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:43.360517+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 81)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:43.350088+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.9 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:43.360517+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.9 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.432779+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.432930+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.433084+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.433237+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:18.433401+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.433599+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.433780+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.805700302s of 12.849143982s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.433934+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:51.225123+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:51.235716+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.434228+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 4 last_log 85 sent 83 num 4 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:52.270027+0000 osd.1 (osd.1) 84 : cluster [DBG] 2.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:52.280461+0000 osd.1 (osd.1) 85 : cluster [DBG] 2.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 83)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:51.225123+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.12 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:51.235716+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.12 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 85)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:52.270027+0000 osd.1 (osd.1) 84 : cluster [DBG] 2.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:52.280461+0000 osd.1 (osd.1) 85 : cluster [DBG] 2.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.434493+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.434693+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.434928+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510130 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.435183+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.435525+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.435714+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.436024+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.436188+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:00.255229+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.3 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:00.265732+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.3 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 87)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:00.255229+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.3 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:00.265732+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.3 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512541 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.436428+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959676743s of 10.976650238s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.436609+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:02.201748+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:02.212322+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 89)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:02.201748+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:02.212322+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.436954+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.437141+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:04.223035+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:04.233661+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 91)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:04.223035+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:04.233661+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.437436+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:05.195136+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:05.205720+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 93)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:05.195136+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:05.205720+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522187 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.437928+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:06.206433+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.11 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:06.216894+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.11 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 95)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:06.206433+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.11 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:06.216894+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.11 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.438183+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.438336+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.438559+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:09.126044+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:09.136866+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 97)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:09.126044+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:09.136866+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.438827+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.438990+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.439165+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.439312+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.439492+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.439639+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.439836+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.439927+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 737280 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.874062538s of 15.986262321s, submitted: 10
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.440079+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:18.187943+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:18.198500+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 99)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:18.187943+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:18.198500+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.440352+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:19.232471+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:19.242971+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 101)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:19.232471+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.1d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:19.242971+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.1d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.440619+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:20.262501+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:20.272880+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 531833 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 103)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:20.262501+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:20.272880+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.441070+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.441292+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.441564+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.441732+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.441888+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:25.198768+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:25.209251+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534244 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 105)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:25.198768+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:25.209251+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.442154+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.442356+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.442534+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.442841+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.443036+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020789146s of 13.038912773s, submitted: 8
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 536655 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.443275+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:31.227118+0000 osd.1 (osd.1) 106 : cluster [DBG] 4.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:31.237626+0000 osd.1 (osd.1) 107 : cluster [DBG] 4.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 107)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:31.227118+0000 osd.1 (osd.1) 106 : cluster [DBG] 4.5 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:31.237626+0000 osd.1 (osd.1) 107 : cluster [DBG] 4.5 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.443741+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.443910+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.444090+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.444240+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:35.395296+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:35.405857+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 109)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:35.395296+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1a scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:35.405857+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1a scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539068 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.444485+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.444820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.445126+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.445362+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:39.428528+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.18 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:39.439142+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.18 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 111)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:39.428528+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.18 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:39.439142+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.18 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.445788+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 541481 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.446025+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.446279+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.446585+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.446909+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.447136+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.103586197s of 15.117080688s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543894 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.447307+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:46.344075+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:46.354683+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 113)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:46.344075+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:46.354683+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.447528+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.447864+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.448510+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:49.294155+0000 osd.1 (osd.1) 114 : cluster [DBG] 4.8 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:49.304671+0000 osd.1 (osd.1) 115 : cluster [DBG] 4.8 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 115)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:49.294155+0000 osd.1 (osd.1) 114 : cluster [DBG] 4.8 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:49.304671+0000 osd.1 (osd.1) 115 : cluster [DBG] 4.8 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.449020+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546305 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.449163+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.449299+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.449498+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:53.331729+0000 osd.1 (osd.1) 116 : cluster [DBG] 4.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:53.342209+0000 osd.1 (osd.1) 117 : cluster [DBG] 4.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 117)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:53.331729+0000 osd.1 (osd.1) 116 : cluster [DBG] 4.14 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:53.342209+0000 osd.1 (osd.1) 117 : cluster [DBG] 4.14 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.449820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.449982+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.450142+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548718 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.450359+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009474754s of 12.021146774s, submitted: 6
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.450611+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:58.365230+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:58.389731+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 119)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:58.365230+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.4 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:58.389731+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.4 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.451279+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:59.336401+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:59.350514+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 121)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:59.336401+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.b scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:59.350514+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.b scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.451442+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:00.365757+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:00.380106+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 123)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:00.365757+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:00.380106+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.451644+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 555951 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.451806+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.451967+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:03.346791+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:03.357324+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 125)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:03.346791+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.1 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:03.357324+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.1 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.452205+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.452330+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.452497+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.452692+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.452949+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.453141+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.453307+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.453504+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.718189240s of 13.738556862s, submitted: 8
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.453768+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:12.103943+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:12.118153+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 127)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:12.103943+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.6 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:12.118153+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.6 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.454013+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.454229+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.454478+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:15.095406+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:15.105889+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 129)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:15.095406+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.2 scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:15.105889+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.2 scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.454760+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:16.100152+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:16.117781+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 565595 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 131)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:16.100152+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:16.117781+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.454977+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:17.086127+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:17.100173+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 133)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:17.086127+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.c scrub starts
Jan 10 17:23:19 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:17.100173+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.c scrub ok
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.455346+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.455577+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.455816+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.456032+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.456173+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.456348+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.456525+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.456662+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.456807+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.456944+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.457322+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.457752+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.458003+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.458222+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.458398+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.458618+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.458826+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.458990+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.459176+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.459344+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.459478+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.459678+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.459855+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.460009+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.460180+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.460332+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.460492+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.460661+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.460782+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.460977+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.461099+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.461381+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.461522+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.461754+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.461986+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.462134+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.462283+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.462413+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.462545+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.462719+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.462865+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.463169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.463521+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.463674+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.463782+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.463911+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.464228+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.464534+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.464797+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.465125+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.465376+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.465609+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.465961+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.466281+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.466428+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.466600+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.466771+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.466892+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.467038+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.478398+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.478860+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.479065+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.479389+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.479758+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.479931+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.480207+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.480451+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.480688+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.480912+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.481083+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.481240+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.481565+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.481761+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.481970+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.482194+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.482358+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.482493+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.482622+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.482792+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.482952+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.483085+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.483289+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.483452+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.483593+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.483764+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.483931+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.484055+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.484190+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.484320+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.484496+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.484646+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.484857+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.485016+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.485139+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.485584+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.485939+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.486131+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.486300+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.486517+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.486681+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.486869+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.487053+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.487218+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.487351+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.487484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.488000+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.488200+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.488421+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.488570+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.488806+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.488961+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.489197+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.489421+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.489578+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.489765+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.489934+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.490123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.490323+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.490457+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.490681+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.490941+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.491177+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.491351+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.491581+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.491789+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.491967+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.492198+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.492365+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.492577+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.492796+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.493009+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.493261+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.493433+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.493606+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.493756+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.493885+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.494050+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.494207+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.494397+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.494645+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.494881+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.495075+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.495240+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.495435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.495626+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.495779+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.495962+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.496110+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.496280+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.496446+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.496772+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.497011+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.497152+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.497333+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.497484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.497779+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.497922+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.498096+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.498249+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.498386+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.498546+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.498789+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.498913+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.499064+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.499288+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.499452+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.499624+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.499759+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.499904+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.500024+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.500143+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.500338+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.509325+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.509516+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.509654+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.509823+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.509968+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.510194+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.510388+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.510655+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.510827+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.511038+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.511253+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.511542+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.511778+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.511916+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.512063+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.512212+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.512435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.512598+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.512790+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.512988+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.513123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.513286+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.513431+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.513623+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.513748+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.513894+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.514033+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.514184+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.514353+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.514594+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.514776+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.514915+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.515097+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.515266+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.515426+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.515616+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.515791+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.515983+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.516210+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.516441+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.516567+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.516709+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.516820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.516965+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.517164+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.517394+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.517552+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.517685+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.517880+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.518127+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.518279+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.518431+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.518826+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.518996+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.519166+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.519301+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.519500+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.519628+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.519760+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.519934+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.520086+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.520219+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.520378+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.520824+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.520982+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.521153+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.521326+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.521488+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.521635+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.521821+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.522031+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.522182+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.522364+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.522539+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.522748+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.522963+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.523157+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.523316+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.523476+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.523774+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.523983+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.524186+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.524373+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.524546+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.524681+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.524886+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.525097+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.525238+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.525370+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.525542+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.525677+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.525834+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.525909+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.526022+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.526162+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.526348+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.526545+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.526684+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.526893+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.527080+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.527256+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.527440+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.527665+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.527980+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.528161+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.528351+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.528502+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.528691+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.528903+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.529171+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.529378+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.529565+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.529796+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.529967+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.530146+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.530272+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.530460+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.530611+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.530756+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.530918+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.531128+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.531281+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.531439+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.531569+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.531687+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.531816+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.532000+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.532276+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.532415+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.532664+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.532760+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.532969+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.533156+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.533355+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.533547+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.533670+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.533771+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.533963+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.534087+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.534291+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.534485+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.534665+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.534819+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.534967+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.535163+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s
                                           Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.535323+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.535585+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.535785+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.535982+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.536317+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.536529+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.536785+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.536988+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.537208+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.537432+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.537777+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.538012+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.538235+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.538467+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.538732+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.538904+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.539059+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.539267+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.539474+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.539783+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.540002+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.540196+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.540413+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.540581+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.540835+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.540987+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.541179+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.541325+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.541494+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.541694+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.541934+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.542117+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.542286+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.542457+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.542673+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.542876+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.543007+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.543122+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.543334+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.543484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.543638+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.543813+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.543958+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.544079+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.544266+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.544504+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.544663+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.544837+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.544986+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.545134+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.545288+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.545510+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.545737+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.545908+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.546220+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.546456+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.546608+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.546799+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.546960+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.547141+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.547383+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.547560+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.547789+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.547935+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.548122+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.548484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.548623+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.548774+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.548921+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.549370+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.549567+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.549797+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.550161+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.550305+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.550598+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.550776+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.550955+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.551116+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.551311+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.551475+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.551619+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.551820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.551971+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.552139+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.552342+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.552512+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.552643+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.553109+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.553364+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.553757+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.553960+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.554200+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.554454+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.554780+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.555008+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.555429+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.555677+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.555950+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.556088+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.556229+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.556441+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.556627+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.556839+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.556985+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.557204+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.557416+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.557592+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.557858+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.558054+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.558347+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.558623+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.558868+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.559158+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.559363+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.559620+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.559762+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.560048+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.560232+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.560475+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.560636+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.560805+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.561032+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.561380+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.561737+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.562046+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.562279+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.562456+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.562779+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.563016+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.563169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.563342+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.563497+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.563759+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.563923+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.564132+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.564299+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.564458+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.655883+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.656038+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.656437+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.656595+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.656829+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.657000+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.657158+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.657382+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.657566+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.657779+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.657952+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.658194+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.658417+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.658571+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.658721+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.658910+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.659081+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.659316+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.659640+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.659794+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.659954+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.660151+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.660308+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.660473+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.660609+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.660787+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.660939+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.661094+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.661288+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.661483+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.661656+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.661807+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.661969+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.662123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.662276+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.662420+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.662576+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.662830+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.663018+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.663149+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.663259+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.663448+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.663614+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.663778+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.663990+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.664163+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.664296+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.664491+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.689906+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.690160+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.690300+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.693883+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.694023+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.694175+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.694295+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.694435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.694605+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.694847+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.695023+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.695175+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.695340+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.695601+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.695794+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.695969+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.696132+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.696750+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.696912+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.697358+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.697585+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.697794+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.697943+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.698157+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.698326+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.698466+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.698654+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.698827+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.699093+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.699374+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.699746+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.700140+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.700321+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.700484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.700657+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.700778+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.700942+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.701176+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.701362+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.701592+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.701747+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.701909+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.702128+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.702320+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.702486+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.702638+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.702995+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.703143+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.703321+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.703509+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.703650+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.703770+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.704046+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.704322+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.704539+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.704759+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.704959+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.705169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.705363+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.705609+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.705795+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.705954+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.706203+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.706394+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.706534+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.706678+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.706892+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.707120+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.707270+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.707478+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.707763+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.707973+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.708109+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.708367+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.708548+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.708777+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.709033+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.709628+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.709838+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.710404+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.710563+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.710762+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.710962+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.711117+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.711262+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.711404+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.711545+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.711770+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.711924+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.712127+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.712364+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.712566+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.712736+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.712941+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.713136+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.713298+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.713492+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.713938+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.726361+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.726608+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.726810+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.726983+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.727131+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.727264+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.727380+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.727580+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.727756+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.728796+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.728936+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.729168+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.729305+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.729474+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.729638+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.729788+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.729947+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.730078+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.730261+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.730409+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.730573+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.730803+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.730955+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.731165+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.731310+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.731451+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.731618+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.731753+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.731891+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.732093+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d5962f7000 session 0x55d596c1a700
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b79c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d597b79000 session 0x55d596c0e700
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5962f7000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.732225+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.732414+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.732574+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.732768+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.732911+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.733070+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.733263+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.733397+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.733522+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.733640+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.733809+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.734097+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.734309+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.734495+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.734672+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.734859+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.735004+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.735131+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.735296+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.735438+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.735584+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.735981+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.736233+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.736438+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.736577+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.736837+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.737020+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.737245+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.737410+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.737573+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.737722+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.737952+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.738136+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.738321+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.738502+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.738659+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.738934+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.739107+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.739329+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.739481+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.739625+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.739923+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.740086+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.740261+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.740491+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.740645+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.740809+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.740989+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.741178+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.741414+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.741564+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.741861+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.742011+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.742185+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.742342+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.742526+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.742726+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.742850+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.743056+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.743300+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.743492+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.743818+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.744003+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.744192+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.744363+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.744593+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.744788+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.744973+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.745279+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.745483+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.745635+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.746299+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.746455+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.746860+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.746996+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.747130+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.747382+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.747538+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.747669+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.747838+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.748027+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.748219+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.748487+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.748748+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.748920+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.749078+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.749305+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.749606+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.749771+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.749924+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.750034+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.750240+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.750424+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.750547+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.750672+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.750892+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.751003+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.751137+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.751280+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.751486+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.751646+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.751940+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.752152+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.752282+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.752442+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.752607+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.752817+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.753024+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.753269+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.753472+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.753655+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.753892+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.754091+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.754256+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.754597+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.755475+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.755898+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.756122+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.756274+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.756471+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.756658+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.756875+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.757081+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.757335+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.757521+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.757791+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.758017+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.758199+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.758464+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.758860+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:28.759119+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.759432+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.759613+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.759804+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.760072+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.760290+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.760412+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.760602+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.760920+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.761150+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.761393+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.764022+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.766102+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.766620+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.768449+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.769170+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.770674+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.771251+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.771804+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.772119+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.772805+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.773796+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.774123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.774542+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.774769+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.775148+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.775647+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.775909+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.776204+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.776352+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.776563+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.777148+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.777492+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.777817+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.778112+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.778329+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.778646+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.778857+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.779097+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.779344+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.779491+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.779771+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.780018+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.780210+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.780395+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.780565+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.780766+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.780919+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.781156+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.781370+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.781575+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.781842+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.782037+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.782218+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.782437+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.782552+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.782826+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.782978+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.783148+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.783246+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.783334+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.783560+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.783765+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.783980+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.784152+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.784331+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.784540+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.784726+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.784927+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.785096+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.785286+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.785684+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.785900+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.786165+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.786350+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.788005+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.788152+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.788683+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.789445+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.789837+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.789986+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.790271+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.790579+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.790885+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.791086+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.791339+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.791621+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.791928+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.792269+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.792508+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.792791+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.793283+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.793533+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.793760+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.793968+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.794245+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.794450+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.794758+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.794995+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.795351+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.795516+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.795761+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.796001+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.796218+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.796435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.796588+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.796806+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.796965+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.797880+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.798077+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.798264+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.798531+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.798691+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.798880+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.799027+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.799194+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.799350+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.799543+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.799873+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.800058+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.800227+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.800397+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.800569+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.800730+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.800930+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.801169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.801355+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.801533+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.801820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.802000+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.802160+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.802565+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.802886+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.803039+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.803191+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.803343+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.803496+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.803609+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.803783+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.803949+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.804168+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.804423+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.804594+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.804770+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.804945+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.805194+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.805546+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.805862+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.806169+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.806532+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.806778+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.807077+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.807408+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.807563+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.807812+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.807984+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.808199+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.808420+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.808598+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.808836+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.809038+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.809380+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.809616+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.809810+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.810013+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.810272+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.810459+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.810663+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.810946+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.811145+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.811302+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.811547+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.811785+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.812008+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.812291+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.812524+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.812728+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.812913+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.813084+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.813259+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.813485+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.813764+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.813942+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.814091+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.814238+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.814495+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.814757+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.814891+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.815143+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.815381+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.815594+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.815851+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.816035+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.816214+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.816480+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.816805+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.817092+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.817295+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.817570+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.817772+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.817983+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.818238+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.819435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.819653+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.820336+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.820774+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.820985+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.821138+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.821679+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.821948+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.822125+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.822384+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.822959+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.823254+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.823642+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.824069+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.824381+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.824545+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.824797+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.824986+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.825203+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3ec00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.825449+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 987.875549316s of 987.892700195s, submitted: 8
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 548864 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.825605+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 573166 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 8749056 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.825862+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 71 ms_handle_reset con 0x55d598f3ec00 session 0x55d5975fc000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 15982592 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd892000/0x0/0x4ffc00000, data 0x8c0ead/0x936000, compress 0x0/0x0/0x0, omap 0xbe6e, meta 0x1a24192), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.826021+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 15851520 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.826221+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 24100864 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 72 ms_handle_reset con 0x55d598f3f400 session 0x55d598b91c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.826440+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.826651+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.826881+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.827182+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.827391+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.827641+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.827957+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.579319000s of 11.783731461s, submitted: 42
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.828135+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.828344+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.828562+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.828796+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.828985+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.829166+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.829359+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.829521+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.829786+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.830026+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.830312+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.830516+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.830768+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.830965+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.831137+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.831326+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.831467+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.831638+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.831813+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.831998+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.832190+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.832322+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.832488+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.832734+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.832887+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.833055+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.833250+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.833373+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.833581+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.833773+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.833981+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.834197+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.834409+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.834587+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.834762+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.930355072s of 34.937496185s, submitted: 13
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.834900+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29800 session 0x55d596c1a1c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 23683072 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29c00 session 0x55d598e47c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29000 session 0x55d596cc2e00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.835572+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 75 ms_handle_reset con 0x55d596c28000 session 0x55d598e74700
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 23625728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.836123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29c00 session 0x55d5975fd180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29000 session 0x55d598e26700
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d598f3f400 session 0x55d596cc2540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d599109800 session 0x55d5975fc000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.836345+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 22364160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fcc0f000/0x0/0x4ffc00000, data 0x1539540/0x15bc000, compress 0x0/0x0/0x0, omap 0xd5db, meta 0x1a22a25), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.836476+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715390 data_alloc: 218103808 data_used: 19
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 77 ms_handle_reset con 0x55d596c28000 session 0x55d596cc2a80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.836586+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 78 ms_handle_reset con 0x55d596c29000 session 0x55d597b17180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 22298624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.836809+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 22274048 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.837009+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 79 ms_handle_reset con 0x55d596c5d000 session 0x55d597b16c40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 22249472 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.837171+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc07000/0x0/0x4ffc00000, data 0x153d348/0x15c3000, compress 0x0/0x0/0x0, omap 0xe026, meta 0x1a21fda), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 81 ms_handle_reset con 0x55d596c29800 session 0x55d5975fd180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 22241280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.837398+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733219 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 82 ms_handle_reset con 0x55d596c29c00 session 0x55d596edea80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 22110208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.837611+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447594643s of 10.715058327s, submitted: 154
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c28000 session 0x55d597b16fc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c29000 session 0x55d597b176c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 22167552 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.837769+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 84 ms_handle_reset con 0x55d596c29800 session 0x55d597b16540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 22159360 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.837909+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 85 ms_handle_reset con 0x55d596c5d000 session 0x55d5988be540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 22077440 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3ec00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.838078+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fc3ed000/0x0/0x4ffc00000, data 0x1d46de1/0x1ddd000, compress 0x0/0x0/0x0, omap 0xf073, meta 0x1a20f8d), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 21741568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.838251+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962711 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20537344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 86 ms_handle_reset con 0x55d598f3ec00 session 0x55d596d128c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.838467+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 20496384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.838619+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 87 ms_handle_reset con 0x55d596c28000 session 0x55d597e6a000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 20545536 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x15499ac/0x15e2000, compress 0x0/0x0/0x0, omap 0xf5cf, meta 0x2bc0a31), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.838784+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29000 session 0x55d598e47880
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29800 session 0x55d596edf6c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 20799488 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.838943+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 89 ms_handle_reset con 0x55d596c5d000 session 0x55d597b2d180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 20652032 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.839117+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 90 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773529 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 20602880 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.839306+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 91 ms_handle_reset con 0x55d596c28000 session 0x55d597b17500
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 20578304 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.893272400s of 10.285517693s, submitted: 131
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.839489+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 92 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13880
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 19365888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b500
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.839657+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b5000 session 0x55d597b161c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1550b01/0x15ed000, compress 0x0/0x0/0x0, omap 0x10a25, meta 0x2bbf5db), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.839983+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 19259392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.840219+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784952 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.840461+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.840774+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.841141+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 94 ms_handle_reset con 0x55d5993b5400 session 0x55d596ebc380
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.841295+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15535da/0x15f4000, compress 0x0/0x0/0x0, omap 0x11026, meta 0x2bbefda), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d598f3f400 session 0x55d597bd8c40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d596c28000 session 0x55d596ebce00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b880
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.841613+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.841918+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.842097+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.842222+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.842447+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.842649+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.842857+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.843101+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.843274+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b5000 session 0x55d598e476c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.843483+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774b400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.301731110s of 17.467674255s, submitted: 110
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d59774b400 session 0x55d596cc2c40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x15560f8/0x15fb000, compress 0x0/0x0/0x0, omap 0x11667, meta 0x2bbe999), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.843681+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796587 data_alloc: 218103808 data_used: 8141
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d596c28000 session 0x55d596edf180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.843849+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.844027+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 99 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13dc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.844851+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 18915328 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.845012+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.845125+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805611 data_alloc: 218103808 data_used: 10701
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.845278+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.845425+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 18587648 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d5993b7800 session 0x55d597e6b6c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d598fb8400 session 0x55d596ec8540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.845565+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 18579456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.845778+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d5993b7c00 session 0x55d596c1b500
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d596c28000 session 0x55d598e8d6c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204052925s of 10.307401657s, submitted: 49
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 18505728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.845947+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812037 data_alloc: 218103808 data_used: 10802
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fb9ff000/0x0/0x4ffc00000, data 0x157f8d4/0x162b000, compress 0x0/0x0/0x0, omap 0x11e17, meta 0x2bbe1e9), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 102 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8da40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 18489344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.846123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 103 ms_handle_reset con 0x55d598fb8400 session 0x55d598e47a40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 18472960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.846345+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fb9fa000/0x0/0x4ffc00000, data 0x1580ef5/0x162e000, compress 0x0/0x0/0x0, omap 0x120f9, meta 0x2bbdf07), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d5993b7800 session 0x55d597bd8e00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.846503+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.846753+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.846930+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825379 data_alloc: 218103808 data_used: 10802
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.847075+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.847271+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.847403+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d599109400 session 0x55d598e741c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d596c28000 session 0x55d597bd8c40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d597b78800 session 0x55d597b17500
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597e6a000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 18096128 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598e74fc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8c00 session 0x55d598eb3180
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d596c28000 session 0x55d596ec8700
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d598eb2540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.847566+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598eb3340
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.847786+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829661 data_alloc: 218103808 data_used: 10817
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.847974+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.297220230s of 11.404709816s, submitted: 74
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d599109000 session 0x55d597e6ac40
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.848148+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fb9f1000/0x0/0x4ffc00000, data 0x1585084/0x163b000, compress 0x0/0x0/0x0, omap 0x12d33, meta 0x2bbd2cd), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.848354+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.848635+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.848800+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830678 data_alloc: 218103808 data_used: 10833
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d596d121c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598f3f400 session 0x55d5975fddc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.848963+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598fb8400 session 0x55d596d13340
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598b916c0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774a400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774a400 session 0x55d598e75dc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 17752064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598fdc380
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.849072+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 17817600 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d597b78800 session 0x55d596c1aa80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x1588082/0x1643000, compress 0x0/0x0/0x0, omap 0x132c3, meta 0x2bbcd3d), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.849303+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 17809408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8ce00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.849485+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598fb8400 session 0x55d598e75c00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774b400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774b400 session 0x55d598ff4a80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 17514496 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774ac00 session 0x55d596ebc000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.849646+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843426 data_alloc: 218103808 data_used: 14965
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 17489920 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.849804+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.546408653s of 10.006135941s, submitted: 114
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d597b78800 session 0x55d598e8d340
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 16400384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.849998+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d596c28000 session 0x55d596d12a80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x158a889/0x1646000, compress 0x0/0x0/0x0, omap 0x13de3, meta 0x2bbc21d), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 16424960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.850938+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 110 ms_handle_reset con 0x55d598f3f400 session 0x55d597b2d500
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.851159+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.851380+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850847 data_alloc: 218103808 data_used: 10802
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fb9dd000/0x0/0x4ffc00000, data 0x158d35f/0x164a000, compress 0x0/0x0/0x0, omap 0x1463c, meta 0x2bbb9c4), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b4000 session 0x55d596cc3340
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b5000 session 0x55d597b2c540
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.851520+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d596c28000 session 0x55d597bd9880
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.851691+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d59774ac00 session 0x55d5988bfdc0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.851930+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 112 ms_handle_reset con 0x55d597b78800 session 0x55d5988bea80
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fba02000/0x0/0x4ffc00000, data 0x156a94e/0x1628000, compress 0x0/0x0/0x0, omap 0x14a07, meta 0x2bbb5f9), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.852123+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.852299+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853791 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.852454+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fb9fd000/0x0/0x4ffc00000, data 0x156be1a/0x162b000, compress 0x0/0x0/0x0, omap 0x14c5d, meta 0x2bbb3a3), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.852788+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.852943+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.853272+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.853488+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.037557602s of 14.196849823s, submitted: 107
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.853786+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.853992+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.854178+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.854395+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.854615+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.854809+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.854997+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.855189+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.855439+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.855680+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.855947+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.856435+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.856792+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.856971+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.857229+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.857484+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.857908+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.858338+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.858677+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.859132+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.859385+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.859575+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.859820+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.860059+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.860334+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.860607+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.860771+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.861054+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.861572+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.861810+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.862068+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.862348+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.862591+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.862790+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.862989+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.863159+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.863383+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.863817+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.864208+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.864441+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.864631+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.864821+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.865083+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.865293+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.865514+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.865791+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.865960+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.866126+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.866384+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.866552+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.866739+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.866976+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.867157+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.867296+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.867493+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.867619+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.867813+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.867997+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.868193+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.868375+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.868499+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.868767+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.868946+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.869122+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.869283+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.869468+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.869577+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.869725+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.869923+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.870093+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.870260+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.870459+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.870878+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.871070+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.871224+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.871354+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.871501+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.871644+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.871910+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.872041+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.872197+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.872325+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.872580+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:44.872816+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:45.872969+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16728064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}'
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:19 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:46.873095+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:47.873214+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:23:19 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:48.873379+0000)
Jan 10 17:23:19 compute-0 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:23:19 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:23:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 10 17:23:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782990970' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 10 17:23:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516252073' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:19 compute-0 rsyslogd[1006]: imjournal from <np0005580781:ceph-osd>: begin to drop messages due to rate-limiting
Jan 10 17:23:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 10 17:23:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977626955' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 10 17:23:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 10 17:23:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290479549' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: from='client.14782 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2782990970' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3516252073' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/977626955' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3290479549' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 10 17:23:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606868372' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 10 17:23:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978232390' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 10 17:23:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597573317' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 10 17:23:20 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 10 17:23:20 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174338893' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1606868372' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3978232390' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1597573317' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2174338893' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 10 17:23:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811207408' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 10 17:23:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116536327' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:21 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 17:23:21 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304410420' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mon[75249]: pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:22 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3811207408' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/116536327' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/304410420' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 10 17:23:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112384499' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 10 17:23:22 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813956711' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 10 17:23:22 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14816 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:22 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14818 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:22 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:23 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/112384499' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 10 17:23:23 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3813956711' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 10 17:23:23 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14822 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:23 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006327 3 0.000118
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006286 3 0.000069
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006272 3 0.000096
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006498 3 0.000205
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006414 3 0.000289
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006414 3 0.000133
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006265 3 0.000059
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/23 les/c/f=42/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 2023424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 42 heartbeat osd_stat(store_statfs(0x4fe155000/0x0/0x4ffc00000, data 0x2afbb/0x73000, compress 0x0/0x0/0x0, omap 0x78c8, meta 0x1a28738), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:19.785412+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.493062019s of 10.747094154s, submitted: 172
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 7)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:48.188829+0000 osd.0 (osd.0) 6 : cluster [DBG] 4.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:48.199301+0000 osd.0 (osd.0) 7 : cluster [DBG] 4.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 342906 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 1949696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:20.785608+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 1941504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:21.785860+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 2301952 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:22.786070+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:52.164485+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:52.175025+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 9)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:52.164485+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:52.175025+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 2301952 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:23.786418+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:53.143387+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:53.153925+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 11)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:53.143387+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:53.153925+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 43 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000028
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000181 1 0.000044
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000035
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000144 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000052
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000023
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000034
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000032
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000197 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000051
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000146 1 0.000059
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000177 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000098
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000346 1 0.000230
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000035
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000148 1 0.000043
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000113 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000034
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000067
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000057
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 1 0.000061
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000028
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000092 1 0.000054
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000019
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000324 1 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000096 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000039
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000229 1 0.000055
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000094 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000100
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000149 1 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000077
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000094 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000221 1 0.000037
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000112 1 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.042418 13 0.000093
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.050443 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.050489 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.050530 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956947327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440971375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] exit Reset 0.000074 1 0.000128
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043192 13 0.000437
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.050829 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.050886 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043244 13 0.000482
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.050938 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.050930 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.050998 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051030 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956763268s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.441024780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043300 13 0.000078
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956607819s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440879822s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.050987 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] exit Reset 0.000054 1 0.000096
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051050 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] exit Reset 0.000039 1 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051090 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043361 13 0.000590
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051156 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051207 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956534386s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440910339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051242 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043444 13 0.000523
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051277 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051320 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956438065s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440856934s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] exit Reset 0.000097 1 0.000164
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] exit Reset 0.000036 1 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051357 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956384659s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440849304s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] exit Reset 0.000040 1 0.000078
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.042994 13 0.000188
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051089 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051486 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043627 13 0.000163
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051505 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051567 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051521 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051602 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.429336 4 0.000073
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.435686 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956201553s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440826416s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.995945 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.995990 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] exit Reset 0.000067 1 0.000089
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956261635s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440933228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] exit Reset 0.000108 1 0.000140
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570457458s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043897 13 0.000121
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051779 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051830 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.429675 4 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] exit Reset 0.000103 1 0.000166
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] exit Start 0.000045 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.043777 13 0.000670
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052276 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.052392 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.052458 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955455780s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440795898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] exit Reset 0.000081 1 0.000246
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.436018 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.044698 13 0.000085
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052587 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.996457 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.052649 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051854 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.996518 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.052690 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955231667s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440811157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] exit Reset 0.000087 1 0.000830
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955141068s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569484711s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055030823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] exit Reset 0.000069 1 0.000168
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] exit Reset 0.000147 1 0.000857
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] exit Start 0.000026 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.430479 4 0.000063
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.437004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] exit Start 0.000068 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.996591 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.996647 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569361687s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] exit Reset 0.000052 1 0.000100
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.430951 4 0.000054
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.437234 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.996551 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.996636 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568908691s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054847717s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045187 13 0.000094
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053108 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053156 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] exit Reset 0.000080 1 0.000109
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053187 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] exit Start 0.000028 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045301 13 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053288 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053343 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053369 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954612732s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440727234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.431316 4 0.000197
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] exit Reset 0.000057 1 0.000098
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.437622 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954586029s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440734863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.998096 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] exit Reset 0.000203 1 0.000244
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.998176 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] exit Start 0.000095 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568431854s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] exit Reset 0.000071 1 0.000186
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045656 13 0.000214
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053691 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045663 13 0.000089
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053687 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053745 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053735 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053778 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053781 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954238892s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440666199s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] exit Reset 0.000037 1 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954300880s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] exit Reset 0.000065 1 0.000121
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.431939 4 0.000097
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.438112 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.998218 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.998352 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] exit Start 0.000075 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567855835s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054450989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] exit Reset 0.000041 1 0.000075
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] exit Start 0.000031 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.046189 13 0.000074
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.054139 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.054202 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.054259 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953746796s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440574646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] exit Reset 0.000083 1 0.000128
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.046277 13 0.000061
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.054492 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.054550 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.054574 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953638077s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440696716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] exit Reset 0.000042 1 0.000070
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.432325 4 0.000074
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.438791 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.046763 13 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.998551 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.055018 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.055063 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.055087 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953246117s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] exit Reset 0.000038 1 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.998818 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567257881s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.436696 4 0.000103
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.439412 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.046042 13 0.000137
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.999763 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.054072 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.055686 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.999813 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562947273s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.050613403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] exit Reset 0.000053 1 0.000124
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.047344 13 0.000100
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.055457 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.056031 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.056053 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] exit Reset 0.000419 1 0.000699
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952595711s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.055718 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] exit Reset 0.000068 1 0.000068
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953011513s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440956116s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] exit Reset 0.000058 1 0.000362
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015204 2 0.000045
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014974 2 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000106 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014552 2 0.000067
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000276 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000025
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000065 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000117
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000037
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000107 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000140 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000223 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000038
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000198 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000050
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000044
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000036
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000029
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000069 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018990 2 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015557 2 0.000060
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015084 2 0.000070
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014526 2 0.000049
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022030 2 0.000036
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012934 2 0.000052
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000094
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010267 2 0.000085
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010711 2 0.000054
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009791 2 0.000107
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009266 2 0.000035
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008795 2 0.000025
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009058 2 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000044 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008616 2 0.000027
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008595 2 0.000028
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008416 2 0.000025
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008491 2 0.000025
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010807 2 0.000024
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011269 2 0.000034
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010714 2 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014543 2 0.000062
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006544 2 0.000079
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010642 2 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.032683 2 0.000042
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.031906 2 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030507 2 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.031003 2 0.000066
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030544 2 0.000027
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.033880 2 0.000169
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030952 2 0.000052
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030699 2 0.000103
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030398 2 0.000051
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.029920 2 0.000037
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.027724 2 0.000043
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.030287 2 0.000051
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.027401 2 0.000070
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.032882 2 0.000073
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.026923 2 0.000093
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.027779 2 0.000039
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 507904 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:24.786800+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406042 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 44 handle_osd_map epochs [44,45], i have 45, src has [1,45]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.116457 2 0.000189
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.127230 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.116902 2 0.000751
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.127743 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.109784 2 0.000652
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124180 2 0.000034
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136864 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.137334 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.109815 2 0.000588
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.137743 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.110905 2 0.000078
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.138807 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124603 2 0.000024
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.139323 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124744 2 0.000032
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.140108 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.110949 2 0.000049
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.138628 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124931 2 0.000038
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.140871 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.117512 2 0.000035
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.124291 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.111253 2 0.000039
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.141303 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.111189 2 0.000036
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.141616 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120732 2 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129277 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.111609 2 0.000032
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121307 2 0.000045
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131750 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.142155 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120904 2 0.000067
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129622 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.111824 2 0.000079
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.142672 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121252 2 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129986 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.112070 2 0.000032
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.143198 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121427 2 0.000069
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130375 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.113239 2 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.144353 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.113089 2 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.113433 2 0.000030
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.144218 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.143876 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.111015 2 0.000708
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.113632 2 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.145732 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.145048 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121909 2 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131962 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.112567 2 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.146566 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.118480 2 0.000064
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133233 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.118687 2 0.000045
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129571 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121933 2 0.000083
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114167 2 0.000045
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131136 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.147010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125837 2 0.000044
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.147982 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122216 2 0.000039
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133051 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122184 2 0.000032
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131693 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.129244 2 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148339 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.118921 2 0.000036
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121485 2 0.000048
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130274 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130067 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.134186 2 0.000024
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148911 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.134270 2 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.149394 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.134241 2 0.000142
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.149768 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003210 3 0.000157
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139311 7 0.000095
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.140164 7 0.000309
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.137428 7 0.000170
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139271 7 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139035 7 0.000159
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139389 7 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008753 4 0.000194
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009030 4 0.000073
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000030 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009697 4 0.000152
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009630 4 0.000140
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009609 4 0.000086
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009569 4 0.000061
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009573 4 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009587 4 0.000120
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009549 4 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009612 4 0.000069
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009550 4 0.000101
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009920 4 0.000126
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009834 4 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009835 4 0.000074
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009840 4 0.000091
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009775 4 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009715 4 0.000077
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009698 4 0.000062
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009744 4 0.000063
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010504 4 0.000441
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010674 4 0.000094
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010522 4 0.000167
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010468 4 0.000205
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010453 4 0.000164
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010570 4 0.000247
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010347 4 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010504 4 0.000068
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010479 4 0.000119
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010544 4 0.000063
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010483 4 0.000055
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010553 4 0.000113
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010455 4 0.000077
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010463 4 0.000069
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010604 4 0.000054
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010390 4 0.000044
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010391 4 0.000062
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010242 4 0.000129
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010376 4 0.000080
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010870 4 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010374 4 0.000092
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.150637 7 0.000083
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.151593 7 0.000051
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000804 1 0.000046
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000855 1 0.000074
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154188 7 0.000134
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000070 1 0.000075
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154166 7 0.000180
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156091 7 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.155071 7 0.000134
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156727 7 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156633 7 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000341 1 0.000042
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153129 7 0.000095
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000354 1 0.000019
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154115 7 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154367 7 0.000053
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000688 1 0.000059
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000464 1 0.000059
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000566 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000557 1 0.000112
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000555 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000573 1 0.000053
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.157576 7 0.000056
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000316 1 0.000115
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156984 7 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158001 7 0.000158
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000036 1 0.000033
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156607 7 0.000131
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.160086 7 0.000070
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.159203 7 0.000079
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.160212 7 0.000183
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158216 7 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156720 7 0.000120
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.160438 7 0.000077
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000482 1 0.000082
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.160725 7 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000358 1 0.000028
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000372 1 0.000017
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000379 1 0.000028
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000404 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000441 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000475 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000572 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000639 1 0.000627
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.011822 1 0.000100
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012686 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.163371 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014650 1 0.000051
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.015577 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.167226 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.018854 1 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018953 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.173206 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.084920 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085320 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.239593 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092234 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.092639 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.247808 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.099227 1 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.099970 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.256111 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.106674 1 0.000102
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.107190 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.263981 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.113943 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.114558 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.271223 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.121338 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.121970 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.275195 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.128578 1 0.000035
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.129181 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.283331 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.135780 1 0.000019
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.136398 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.290810 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140880 1 0.000031
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141241 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.298877 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148335 1 0.000011
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.148411 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.305423 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155137 1 0.000065
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.155666 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.313806 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162633 1 0.000080
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.163036 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.319689 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169706 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.170119 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.330233 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177145 1 0.000031
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.177554 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.336797 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184486 1 0.000042
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.184931 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.345190 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191954 1 0.000045
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.192447 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.350701 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198926 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.199629 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.360385 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206565 1 0.000041
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.207089 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.363853 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213815 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.214439 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.374913 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.304203 2 0.000046
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.304237 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000113 1 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2e10f/0x79000, compress 0x0/0x0/0x0, omap 0x7dd2, meta 0x1a2822e), peers [1,2] op hist [0,0,1])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:25.786944+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.429685 2 0.000024
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.429748 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000225 1 0.000169
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.137483 2 0.000331
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.137649 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.581262 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.577781 2 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.577827 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000135 1 0.000081
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.152135 2 0.000299
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.152435 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.722441 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.717906 2 0.000021
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.717962 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.145136 2 0.000189
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.145328 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.862497 0 0.000000
Jan 10 17:23:23 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14828 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.792060 2 0.000069
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.792106 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000182 1 0.000151
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.862968 2 0.000067
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.863024 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000396 1 0.000104
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.149204 2 0.000275
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.149382 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 2.004844 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.103667 2 0.000168
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.103911 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 2.035464 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.040166 2 0.000212
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.040657 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 2.042815 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 352256 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:26.787107+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:56.154828+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:56.269160+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 253952 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 13)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:56.154828+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:56.269160+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:27.787387+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 heartbeat osd_stat(store_statfs(0x4fe153000/0x0/0x4ffc00000, data 0x2f277/0x77000, compress 0x0/0x0/0x0, omap 0x8339, meta 0x1a27cc7), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 253952 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:28.787770+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:58.139073+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:58.149643+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 15)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:58.139073+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:58.149643+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:29.788065+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:59.125631+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T16:59:59.136164+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 354967 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 17)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:59.125631+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T16:59:59.136164+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:30.788288+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 heartbeat osd_stat(store_statfs(0x4fe155000/0x0/0x4ffc00000, data 0x2f277/0x77000, compress 0x0/0x0/0x0, omap 0x8339, meta 0x1a27cc7), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.575263977s of 11.020032883s, submitted: 342
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.081620 11 0.000115
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.088042 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.647887 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 13.648026 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918597221s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055480957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.082065 11 0.000100
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.088425 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.648986 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 13.649034 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] exit Reset 0.000187 1 0.000271
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918116570s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055183411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] exit Reset 0.000111 1 0.000167
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.085873 11 0.000169
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.089088 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.648622 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 13.648662 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913496971s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.050857544s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] exit Reset 0.000103 1 0.000169
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.082199 11 0.000087
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.088673 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.648373 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 13.648454 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917839050s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055328369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] exit Reset 0.000063 1 0.000099
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.568992 7 0.000096
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.569271 7 0.000192
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.573066 7 0.000166
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000087 1 0.000072
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.574401 7 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000094 1 0.000060
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.006591 1 0.000113
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.006817 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.579946 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012270 2 0.000057
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.012308 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000140
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012387 1 0.000052
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012521 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.586966 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.087643 2 0.000076
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.087695 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000152 1 0.000095
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.079770 2 0.000312
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.079941 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 0.661610 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.019342 2 0.000246
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.019563 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 0.676319 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:31.788489+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:01.101931+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:01.112483+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 47 heartbeat osd_stat(store_statfs(0x4fe150000/0x0/0x4ffc00000, data 0x3088d/0x7a000, compress 0x0/0x0/0x0, omap 0x85f9, meta 0x1a27a07), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1089536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 19)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:01.101931+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:01.112483+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000183 1 0.000278
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000165 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000066
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000435 1 0.000084
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000179 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000037 1 0.000073
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000222 1 0.000120
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=0 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000081
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003306 2 0.000054
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001766 2 0.000055
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003230 2 0.000089
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.002491 2 0.000167
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000024 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:32.788770+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:02.071871+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:02.082453+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 48 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011312 2 0.000097
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012164 2 0.000133
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.015747 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.014146 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011621 2 0.000095
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.015411 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011931 2 0.000107
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.013896 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 21)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:02.071871+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:02.082453+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002206 4 0.000306
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002052 4 0.000317
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000422 1 0.000355
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000023 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.005874 4 0.000182
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006072 4 0.000192
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010344 2 0.000175
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.010487 2 0.000619
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.123834 1 0.000109
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.130786 2 0.000068
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:33.789044+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:03.030367+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:03.040975+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.207221 1 0.000109
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.338119 2 0.000191
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058992 1 0.000142
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/44 les/c/f=49/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.686059 23 0.000187
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.692407 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 17.253306 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 17.253379 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313828468s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.055412292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] exit Reset 0.000111 1 0.000358
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] exit Start 0.000020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.687108 23 0.000120
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.693398 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 17.253255 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 17.253283 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312836647s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.054672241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] exit Reset 0.000072 1 0.000124
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] exit Start 0.000020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 23)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:03.030367+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:03.040975+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:34.789325+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:04.054398+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:04.064948+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386881 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 25)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:04.054398+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:04.064948+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022625 7 0.000178
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022903 7 0.000140
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.011801 2 0.000061
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.011844 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000127 1 0.000115
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.123957 2 0.000251
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.124155 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.158991 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.266955 2 0.000075
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.267020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000290 1 0.000148
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:35.789549+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.032975 2 0.000312
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.033338 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=-1 lpr=50 pi=[39,50)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.323098 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:36.789718+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:37.789941+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:06.997635+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:07.008125+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 27)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:06.997635+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:07.008125+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe13f000/0x0/0x4ffc00000, data 0x36e6b/0x89000, compress 0x0/0x0/0x0, omap 0xabcb, meta 0x1a25435), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:38.790240+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:39.790424+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 383123 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:40.790661+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:10.031755+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:10.042190+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 29)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:10.031755+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:10.042190+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.965067863s of 10.104757309s, submitted: 77
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 52 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x38481/0x8c000, compress 0x0/0x0/0x0, omap 0xac7b, meta 0x1a25385), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=0 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000170 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=0 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000061 1 0.000099
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000173 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000236 1 0.000406
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=0 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000153 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=0 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000091 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000135 1 0.000210
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001384 2 0.000102
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000568 2 0.000071
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:41.790921+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.929685 2 0.000206
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.930596 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.930487 2 0.000136
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.932280 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003310 3 0.000505
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000101 1 0.000113
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000017 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004876 4 0.000396
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.078046 3 0.000132
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.076168 2 0.000052
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.144850 1 0.000067
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=52/44 les/c/f=53/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:42.791069+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:11.999102+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.1d scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:12.009686+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.1d scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 31)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:11.999102+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.1d scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:12.009686+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.1d scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:43.791351+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:12.959438+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:12.969971+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x399a1/0x91000, compress 0x0/0x0/0x0, omap 0xafd9, meta 0x1a25027), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 53 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 33)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:12.959438+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:12.969971+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:44.791602+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407295 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:45.791790+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fe132000/0x0/0x4ffc00000, data 0x3afb7/0x94000, compress 0x0/0x0/0x0, omap 0xb0e1, meta 0x1a24f1f), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:46.791954+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:47.792171+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:48.792397+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fe132000/0x0/0x4ffc00000, data 0x3afb7/0x94000, compress 0x0/0x0/0x0, omap 0xb0e1, meta 0x1a24f1f), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:49.792657+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:19.010294+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.11 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:19.020833+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.11 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412957 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fe133000/0x0/0x4ffc00000, data 0x3c5cd/0x97000, compress 0x0/0x0/0x0, omap 0xb191, meta 0x1a24e6f), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 35)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:19.010294+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.11 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:19.020833+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.11 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 31.790528 43 0.000202
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 31.796997 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 33.357050 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 33.357113 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209659576s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 active pruub 119.055152893s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] exit Reset 0.000125 1 0.000212
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:50.792898+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:19.961425+0000 osd.0 (osd.0) 36 : cluster [DBG] 3.12 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:19.972023+0000 osd.0 (osd.0) 37 : cluster [DBG] 3.12 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 56 heartbeat osd_stat(store_statfs(0x4fe130000/0x0/0x4ffc00000, data 0x3dbe3/0x9a000, compress 0x0/0x0/0x0, omap 0xb241, meta 0x1a24dbf), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 37)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:19.961425+0000 osd.0 (osd.0) 36 : cluster [DBG] 3.12 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:19.972023+0000 osd.0 (osd.0) 37 : cluster [DBG] 3.12 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.181605339s of 10.612998009s, submitted: 29
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015219 7 0.000092
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000088 1 0.000093
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 DELETING pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002092 1 0.000047
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002216 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.017495 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:51.793123+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=0 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000122 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=0 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000022
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000407 1 0.000040
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 58 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000628 2 0.000061
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:52.793314+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:22.008784+0000 osd.0 (osd.0) 38 : cluster [DBG] 5.14 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:22.019417+0000 osd.0 (osd.0) 39 : cluster [DBG] 5.14 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 39)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:22.008784+0000 osd.0 (osd.0) 38 : cluster [DBG] 5.14 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:22.019417+0000 osd.0 (osd.0) 39 : cluster [DBG] 5.14 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 58 handle_osd_map epochs [58,59], i have 59, src has [1,59]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.944177 2 0.000058
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.945304 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/44 les/c/f=59/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002761 3 0.000475
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/44 les/c/f=59/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/44 les/c/f=59/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/44 les/c/f=59/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:53.794039+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 59 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x41af9/0xa3000, compress 0x0/0x0/0x0, omap 0xb4a9, meta 0x1a24b57), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:54.794196+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:24.050094+0000 osd.0 (osd.0) 40 : cluster [DBG] 5.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:24.060633+0000 osd.0 (osd.0) 41 : cluster [DBG] 5.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a(unlocked)] enter Initial
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=0 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000208 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=0 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000048 1 0.000107
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000155 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000248 1 0.000282
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000822 2 0.000118
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 434701 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:55.794447+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 41)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:24.050094+0000 osd.0 (osd.0) 40 : cluster [DBG] 5.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:24.060633+0000 osd.0 (osd.0) 41 : cluster [DBG] 5.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 60 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978027 2 0.000118
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979227 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/46 les/c/f=61/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002788 4 0.000218
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/46 les/c/f=61/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/46 les/c/f=61/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/46 les/c/f=61/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:56.794629+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:57.794818+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe121000/0x0/0x4ffc00000, data 0x4458f/0xa9000, compress 0x0/0x0/0x0, omap 0xb609, meta 0x1a249f7), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 24.222818 40 0.000274
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 24.626437 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 25.641881 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 25.641940 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380357742s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 active pruub 133.736038208s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] exit Reset 0.000217 1 0.000319
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] exit Start 0.000017 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:58.794973+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.808420 6 0.000156
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007272 3 0.000513
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.007559 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000115 1 0.000110
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 DELETING pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.009465 2 0.000360
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.009683 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=-1 lpr=62 pi=[48,62)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.825935 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:59.795197+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439161 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:00.795434+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 18.903283 33 0.000417
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 18.985193 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 19.915866 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 19.916015 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018323898s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 active pruub 134.575500488s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] exit Reset 0.000158 1 0.000269
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:01.795616+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.260480881s of 10.348821640s, submitted: 32
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.650333 7 0.000176
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071731 2 0.000068
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.071770 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000070
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 DELETING pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.016832 2 0.000231
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.016988 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.739154 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe116000/0x0/0x4ffc00000, data 0x49c13/0xb4000, compress 0x0/0x0/0x0, omap 0xb5e2, meta 0x1a24a1e), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:02.795766+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe116000/0x0/0x4ffc00000, data 0x49c13/0xb4000, compress 0x0/0x0/0x0, omap 0xb5e2, meta 0x1a24a1e), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.795945+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.796100+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443393 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.796258+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.796408+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.796650+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4b229/0xb7000, compress 0x0/0x0/0x0, omap 0xb692, meta 0x1a2496e), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.796865+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 35.178636 56 0.000295
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 35.522821 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 36.536740 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 36.536799 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483779907s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 active pruub 141.736114502s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] exit Reset 0.000209 1 0.000365
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Started
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Start
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Started/Stray
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 1499136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.797039+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:38.991334+0000 osd.0 (osd.0) 42 : cluster [DBG] 7.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:39.001773+0000 osd.0 (osd.0) 43 : cluster [DBG] 7.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022619 6 0.000297
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 43)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:38.991334+0000 osd.0 (osd.0) 42 : cluster [DBG] 7.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:39.001773+0000 osd.0 (osd.0) 43 : cluster [DBG] 7.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138844 3 0.000713
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138897 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000077 1 0.000083
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.024160 2 0.000210
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.024307 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.186475 0 0.000000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447709 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 1433600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.797430+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:40.075419+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:40.145251+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 45)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:40.075419+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.13 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:40.145251+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.13 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 1433600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.797640+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 1425408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.797839+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.325577736s of 11.136447906s, submitted: 18
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 1409024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.798080+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:43.073333+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:43.083776+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 47)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:43.073333+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.17 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:43.083776+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.17 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 1400832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.798313+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:44.107776+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:44.118278+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 49)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:44.107776+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:44.118278+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451349 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 1392640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.798559+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 1392640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.798751+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.798957+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.799136+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.799319+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453760 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1368064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.799512+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:50.146244+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.8 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:50.156789+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.8 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 51)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:50.146244+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.8 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:50.156789+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.8 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1368064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.799810+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.800069+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:52.167446+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:52.177985+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.800448+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 53)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:52.167446+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.15 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:52.177985+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.15 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.800597+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.089248657s of 12.105111122s, submitted: 8
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458586 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.800860+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:55.178486+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:55.189069+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 55)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:55.178486+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.16 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:55.189069+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.16 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 1327104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.801153+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:56.159998+0000 osd.0 (osd.0) 56 : cluster [DBG] 7.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:56.170302+0000 osd.0 (osd.0) 57 : cluster [DBG] 7.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 57)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:56.159998+0000 osd.0 (osd.0) 56 : cluster [DBG] 7.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:56.170302+0000 osd.0 (osd.0) 57 : cluster [DBG] 7.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.801435+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:57.132905+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:57.143248+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 59)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:57.132905+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:57.143248+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.801786+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:58.118921+0000 osd.0 (osd.0) 60 : cluster [DBG] 3.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:58.129347+0000 osd.0 (osd.0) 61 : cluster [DBG] 3.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 61)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:58.118921+0000 osd.0 (osd.0) 60 : cluster [DBG] 3.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:58.129347+0000 osd.0 (osd.0) 61 : cluster [DBG] 3.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.802021+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465819 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1310720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.802213+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1310720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.802372+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 1286144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.802554+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:01.971813+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:01.982259+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 63)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:01.971813+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:01.982259+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 1269760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.802948+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:02.955270+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:02.965811+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 65)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:02.955270+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.2 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:02.965811+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.2 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 1269760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.804227+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473052 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 1253376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.805490+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:05.022340+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:05.032804+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 67)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:05.022340+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:05.032804+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 1253376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.806248+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.806565+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.807119+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.807345+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473052 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1236992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.808245+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 1228800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.808509+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.449089050s of 16.955524445s, submitted: 14
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.808739+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:12.133022+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:12.143576+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 69)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:12.133022+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:12.143576+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.809109+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.809443+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477874 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.809840+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:15.144038+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:15.154550+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1212416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 71)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:15.144038+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:15.154550+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.810126+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1212416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.810357+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.810557+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.810777+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480285 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.810961+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:20.305589+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:20.316053+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 73)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:20.305589+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:20.316053+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.811213+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.811370+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.150645256s of 11.165366173s, submitted: 6
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.811741+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:23.299639+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:23.310243+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 75)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:23.299639+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:23.310243+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.811978+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482698 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.812375+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.812527+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.812666+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1138688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.812829+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:28.345164+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:28.355761+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1138688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 77)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:28.345164+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.6 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:28.355761+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.6 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.813136+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:29.359069+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:29.369640+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1122304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 79)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:29.359069+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:29.369640+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489931 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.813534+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:30.325015+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:30.335518+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 1105920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 81)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:30.325015+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:30.335518+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.813807+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 1097728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.813979+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:32.442874+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:32.453548+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 1064960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 83)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:32.442874+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.c scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:32.453548+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.c scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.814560+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 1064960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.814774+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 1056768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 492342 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.814934+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 1056768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.815134+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.815303+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.815438+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.110231400s of 16.139881134s, submitted: 10
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.815597+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:39.439440+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:39.450044+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 85)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:39.439440+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.4 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:39.450044+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.4 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494753 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.815852+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.816036+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.816275+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.816538+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.816721+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497164 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.816869+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:45.322816+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:45.333508+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 87)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:45.322816+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:45.333508+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.817103+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.817316+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:18.817476+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.817658+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:49.359848+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:49.370356+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 89)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:49.359848+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1f scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:49.370356+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1f scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499577 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.817988+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.818196+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.795870781s of 12.943486214s, submitted: 6
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.818459+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:52.383048+0000 osd.0 (osd.0) 90 : cluster [DBG] 5.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:52.393362+0000 osd.0 (osd.0) 91 : cluster [DBG] 5.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 91)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:52.383048+0000 osd.0 (osd.0) 90 : cluster [DBG] 5.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:52.393362+0000 osd.0 (osd.0) 91 : cluster [DBG] 5.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.819200+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:53.349860+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:53.360349+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 93)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:53.349860+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1b scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:53.360349+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1b scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.819472+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 504401 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.819759+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.819965+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:56.294828+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:56.305271+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 95)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:56.294828+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:56.305271+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.820260+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.820481+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.820882+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506814 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.821047+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.821299+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.821511+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.704629898s of 10.843565941s, submitted: 6
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.821789+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:03.226608+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:03.237147+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 97)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:03.226608+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.18 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:03.237147+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.18 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.822416+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:04.192904+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:04.203476+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 99)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:04.192904+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:04.203476+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 511638 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.822622+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.822785+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.822990+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.823162+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:08.235978+0000 osd.0 (osd.0) 100 : cluster [DBG] 5.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:08.246454+0000 osd.0 (osd.0) 101 : cluster [DBG] 5.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 101)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:08.235978+0000 osd.0 (osd.0) 100 : cluster [DBG] 5.1e scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:08.246454+0000 osd.0 (osd.0) 101 : cluster [DBG] 5.1e scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.823381+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516462 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.823546+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:10.278437+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:10.303151+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 103)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:10.278437+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.0 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:10.303151+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.0 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.823760+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.824074+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.824262+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.824376+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.053336143s of 12.073998451s, submitted: 8
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518873 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.824537+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:15.300674+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:15.318179+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 105)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:15.300674+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.3 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:15.318179+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.3 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.824768+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.824975+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.825132+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.825303+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518873 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.825488+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.825629+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.825776+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.825987+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:23.293442+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:23.307688+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 107)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:23.293442+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.7 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:23.307688+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.7 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.826249+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 521284 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.826430+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.826572+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.826811+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.993289948s of 13.001964569s, submitted: 4
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.826984+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:28.302620+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:28.313214+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 109)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:28.302620+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.9 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:28.313214+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.9 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.827216+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.827387+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523695 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.827522+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:31.351772+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:31.369817+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 111)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:31.351772+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.5 scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:31.369817+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.5 scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.827787+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.827963+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.828121+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:34.386657+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.a scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:34.397201+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.a scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 113)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:34.386657+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.a scrub starts
Jan 10 17:23:23 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:34.397201+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.a scrub ok
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.828392+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.828586+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.828780+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.828911+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.829038+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.829208+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.829403+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.829609+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.829872+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.830059+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.830224+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.830390+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.830535+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.830766+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.831036+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.831234+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.831370+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.831654+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.831908+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.832125+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.832263+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.832405+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.832645+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.832846+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.833010+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.833243+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.833401+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.833592+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.833914+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.834083+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.834268+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.834438+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.834623+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.834800+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.834976+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.835215+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 466944 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.835459+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 466944 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.835691+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.835963+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.836167+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.836433+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.836665+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.836855+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.837070+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.837264+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.837434+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.837594+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.837796+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.838043+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.838245+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.838413+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.838558+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.838727+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.838878+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.839036+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.839231+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.839440+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.839584+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.839839+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.852872+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.853016+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.853176+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.853369+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.853513+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.853746+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.853946+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.854115+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.854495+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.854770+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.854985+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.855248+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.855478+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.855816+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.856142+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.856296+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.856652+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.857038+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.857192+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.857356+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.857598+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.857895+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.858085+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.858243+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.858417+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.858614+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.858769+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.858921+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.859202+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.859662+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.859903+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.860191+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.860468+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.860681+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.860862+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.861063+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.861225+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.861350+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.861513+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.861743+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.861911+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.862076+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.862737+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.863116+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.863322+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.863505+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.863658+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.863779+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.863988+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.864269+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.864487+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.864767+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.865014+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.865235+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.865471+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.865765+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.865965+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.866340+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.866491+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.866782+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.866916+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.867160+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.867317+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.867575+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.867764+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.867954+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.868161+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.868405+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.868613+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.868841+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.869012+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.869200+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.869423+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.869632+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.869835+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.870015+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.870161+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.870315+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.870447+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.870685+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.870870+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.871011+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.871146+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.871328+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.871488+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.871623+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.871784+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.871927+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.872065+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.872286+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.872436+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.872576+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.872764+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.872903+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.873087+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.873233+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.873358+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.873500+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.873632+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.873886+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.874012+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 8192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.874136+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.874316+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 8192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.874571+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.874900+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.875119+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.875328+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.875585+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.875825+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.876045+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.876178+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.876332+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.876608+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.876837+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.877067+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.877334+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.877516+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.877721+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.877863+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.878059+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.878213+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.878387+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.878570+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.878753+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.878982+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.879285+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.879466+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.879595+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.879801+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.880029+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.880222+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.880400+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.880558+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.880766+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.880951+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.881128+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.881297+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.881448+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.881608+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.881781+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.881921+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.882067+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.882195+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.882380+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.882542+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.882758+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.883064+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.883188+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.883341+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.883515+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.883672+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.883877+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.884017+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.884172+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.884322+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.884435+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.884617+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.884810+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.884959+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.885483+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.885660+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.885832+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.886034+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.886186+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.886473+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.886674+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.887003+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.887275+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.887478+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.887839+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.887970+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.888130+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.888327+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.888462+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.888607+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.888780+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.888957+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.889139+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.889284+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.889493+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.889664+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.889886+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.890098+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.890260+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.890469+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.890627+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.890798+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.890935+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.891120+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.891302+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.891434+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.891596+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.891787+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.892055+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.892290+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.892456+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.892624+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.892780+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.892934+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.893151+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.893343+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.893517+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.893749+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.893901+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.894086+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.894295+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.894480+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.894630+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.894804+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.895004+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.895140+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.895274+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.895436+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.895623+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.895758+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.895937+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.896113+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.896305+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.896466+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.896636+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.896828+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.897000+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.897200+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.897646+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.897979+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.898194+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.898544+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.898835+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.899157+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.899503+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.899806+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.900024+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.900311+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.900595+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.900844+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.901150+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.901547+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.901869+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.902103+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.902407+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.902620+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.902799+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.903026+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.903192+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.903387+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.903521+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.903645+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.903772+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.903937+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.904192+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.904361+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.904528+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.904719+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.904895+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.905067+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.905215+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.905474+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.905640+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.905807+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.906022+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.906266+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.906451+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.906604+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.906822+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.906984+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.907146+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.907289+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.907427+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.907578+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.907797+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.907963+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.908142+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.908407+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.908584+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.908790+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.908957+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.909104+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.909291+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.909455+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.909713+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.909920+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.910083+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.910233+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.910382+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.910518+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.910662+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.910770+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.910930+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.911065+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.911323+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.911477+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.911598+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.911774+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.912022+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.912205+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s
                                           Interval WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.912408+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.912567+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.912722+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.912906+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.913112+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.913323+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.913519+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.913682+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.913957+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.914340+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.914592+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.914789+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.915047+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.915225+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.915459+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.915621+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.915931+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.916116+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.916311+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.916532+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.916844+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.917054+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.917268+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.917540+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.917812+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.918012+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.918210+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.918392+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.918597+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.918805+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.919046+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.919217+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.919418+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.919720+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.919930+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.920055+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.920183+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.920330+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.920484+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.920635+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.920776+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.920923+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.921069+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.921206+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.921391+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.921624+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.921854+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.922106+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.922294+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.922480+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.922771+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.923001+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.923131+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.923360+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.923630+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.923756+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.923879+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.924023+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.924186+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.924385+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.924562+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.924806+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.925051+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.925193+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.925421+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.925610+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.925845+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.925996+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.926169+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.926362+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.926534+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.926730+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.926875+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.926999+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.927218+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.927420+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.927581+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.927774+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.927952+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.928080+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.928223+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.928404+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.928595+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.928773+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.929115+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.929266+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.929436+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.929606+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.929816+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.929990+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.930139+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.930321+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.930468+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.930623+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.930879+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.931093+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.931276+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.931511+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.931746+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.931957+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.932139+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.932406+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.932605+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.932819+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.933057+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.933178+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.933370+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.933520+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.933692+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.933909+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.934108+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.934261+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.934402+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.934556+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.934774+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.934927+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.935119+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.935357+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.935492+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.935671+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.935868+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.936065+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.936251+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.936386+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.936541+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.936723+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.936879+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.937113+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.937278+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.937427+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.937598+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.937772+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.937916+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.938104+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.938312+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.938455+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.938770+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.938924+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.939111+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.939301+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.939965+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.940198+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.940358+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.940834+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.941165+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.941334+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.941571+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.941776+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.942227+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.942395+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.942541+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.942765+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.942921+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.943070+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.943383+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.943579+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.943793+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.943984+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.944146+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.944352+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.944490+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.944646+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.944951+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.945169+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.945393+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.945550+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.945804+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.945968+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.946118+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.946277+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.946386+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.946505+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.946656+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.946820+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.947006+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.947157+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.947288+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.947451+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.947625+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.947809+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.947973+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.948156+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.948307+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.948449+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.948643+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.948782+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.948922+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.949140+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.949294+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.949465+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.949655+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.949773+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.949941+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.950080+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.950302+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.950528+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.950693+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.950884+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.951038+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.951185+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.951518+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.951793+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.951954+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.952142+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.952389+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.952544+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.952747+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.953156+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.953332+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.953467+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.953689+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.953895+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.954043+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.954175+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.954360+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.954511+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.954785+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.954969+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.955151+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.955326+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.955488+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.955671+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.955823+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.955976+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.956190+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.956357+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.956626+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.956786+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.956992+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.957131+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.957283+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.957538+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.957879+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.958075+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.958269+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.958463+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.958626+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.958791+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.958924+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.959065+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.959232+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.959398+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.959529+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.959739+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.960009+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.960169+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.960334+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.960512+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.960669+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.960868+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.961047+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.961296+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.961456+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.961594+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.961808+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.961984+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.962264+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.962425+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.962585+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.962794+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.962931+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.963130+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.963374+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.963543+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.963771+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.963934+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.964087+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.964277+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.964468+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.964662+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.964856+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.964977+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.965167+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.965366+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.965562+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.965735+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.965877+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.966012+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.966192+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.966322+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.966448+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.966625+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.966833+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.967080+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.967379+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.967571+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.967774+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.967955+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.968157+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.968367+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.968538+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.968735+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.968923+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.969119+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.969367+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.969519+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.969730+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.969944+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.970105+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.970265+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.970498+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.970690+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.970941+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.971097+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.971359+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.971615+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.971851+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.972061+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: mgrc ms_handle_reset ms_handle_reset con 0x560f2f55e000
Jan 10 17:23:23 compute-0 ceph-osd[85764]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:23:23 compute-0 ceph-osd[85764]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: get_auth_request con 0x560f2ea1b400 auth_method 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.972246+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.972438+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.972665+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.972835+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.972974+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.973154+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.973372+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.973570+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.973762+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.973944+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.974859+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 ms_handle_reset con 0x560f2f55f000 session 0x560f3060f180
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66cc00
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.975026+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.975176+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.975349+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.975506+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.975655+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.975933+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.976108+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.976253+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.976427+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.976590+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.976754+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.976902+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.977080+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.977314+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.977598+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.977753+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.977952+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.978109+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.978339+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.978513+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.978675+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.978873+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.979051+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.979209+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.979364+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.979547+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.979736+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.979909+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.980049+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.980228+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.980390+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.980587+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.980776+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.980953+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.981085+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.981260+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.981420+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.981558+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.981722+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.981827+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.981982+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.982130+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.982284+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.982469+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.982669+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.982927+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.983170+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.983352+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.983504+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.983637+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.983877+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.984038+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.984189+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.984405+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.984589+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.984780+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.984901+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.985013+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.985160+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.985328+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.985538+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.985810+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.986025+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.986189+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.986315+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.986513+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.986737+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.986904+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.987066+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.987266+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.987443+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.987574+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.987792+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.987933+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.988069+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.988304+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.988476+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.988641+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.988834+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.989054+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.989210+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.989435+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.989597+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.989785+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.989968+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.990236+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.990451+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.990614+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.990840+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.990982+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.991107+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.991250+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.991380+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.991547+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.991689+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.991905+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.992048+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.992313+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.992581+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.992757+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.992922+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.993191+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.993681+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.993887+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.994094+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.994334+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.994524+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.994725+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.995256+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.995450+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.995612+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.995960+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.996162+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.996330+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.996472+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.996669+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:23 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:23 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.997180+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.997467+0000)
Jan 10 17:23:23 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:23 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.997658+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.997812+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.997996+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.998282+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.998457+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.998664+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.998849+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.999102+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.999285+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.999431+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.999618+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.999848+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.000066+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.000237+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.000444+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.000689+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.000939+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.001174+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.001380+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.001597+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.002156+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.002315+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.002974+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.004013+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.005224+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.006289+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.006568+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.006981+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.007475+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.007687+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.007892+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.008029+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.008233+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.008752+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.009018+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.009352+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.009612+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.009815+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.009964+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.010192+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.010433+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.010589+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.010787+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.010967+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.011256+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.011484+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.011620+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.011809+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.012010+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.012238+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.012420+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.012616+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.012835+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.013034+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.013211+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.013366+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.013558+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.013885+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.014023+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.014158+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.014348+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.014502+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.014668+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.014829+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.015005+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.015242+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.015381+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.015538+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.015785+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.015959+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.016129+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.016299+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.016441+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.016614+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.016750+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.016881+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.017078+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.017357+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.017552+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.017757+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.017900+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.018099+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.018275+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.018438+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.018644+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.019177+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.019441+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.019683+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.019909+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.020199+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.020541+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.020835+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.021031+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.021782+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.022059+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.022298+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.022515+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.022747+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.022942+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.023150+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.023336+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.023558+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.023813+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.023987+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.024268+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.024607+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.024917+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.025163+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.025492+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.025672+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.025859+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.026049+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.026205+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.026402+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.026619+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.026782+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.026949+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.027191+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.027325+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.027482+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.027640+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.027848+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.028024+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.028212+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.028350+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.028498+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.028641+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.028855+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.029032+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.029199+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.029357+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.029553+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.029793+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.029950+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.030084+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.030255+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.030418+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.030588+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.030783+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.030950+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.031091+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.031236+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.031403+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.031567+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.031765+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.031972+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.032166+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.032343+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.032504+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.032633+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.034175+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.034291+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.034449+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.034643+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.034788+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.034909+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.035135+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.035383+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.035636+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.035935+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.036199+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.036432+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.036816+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.037053+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.037240+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.070507+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.070870+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.071177+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.071373+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.071599+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.071770+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.072028+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.072242+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.072419+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.072645+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.072880+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.073107+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.073403+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.073597+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.073801+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.074056+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.074269+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.074528+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.074738+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.075000+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.075325+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.075610+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.076192+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.076503+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.076811+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.077097+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.077339+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.077628+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.077931+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.078233+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.078477+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.078827+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.079214+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.079398+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.079572+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.079756+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.079893+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.080672+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.080816+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.080988+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.081124+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.081475+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.081843+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.082017+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.082296+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.082567+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.082786+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.082984+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.083112+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.084516+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.085117+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.085297+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.085838+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.085962+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.086152+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.086460+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.086648+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.086805+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.087035+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.087247+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.087390+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.087599+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.087795+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.088014+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.088198+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.088467+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.088646+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.088885+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1031.683105469s of 1031.696044922s, submitted: 6
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.089085+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 532009 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f026/0xbe000, compress 0x0/0x0/0x0, omap 0xb817, meta 0x1a247e9), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.089303+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31728c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 17121280 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.089514+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 71 ms_handle_reset con 0x560f31728c00 session 0x560f31817500
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 17088512 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bfc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.089754+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 16875520 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.090037+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 71 heartbeat osd_stat(store_statfs(0x4fd903000/0x0/0x4ffc00000, data 0x851c53/0x8c5000, compress 0x0/0x0/0x0, omap 0xb7f0, meta 0x1a24810), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 72 ms_handle_reset con 0x560f2f5bfc00 session 0x560f3177fdc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.090260+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 585074 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd902000/0x0/0x4ffc00000, data 0x85323c/0x8c8000, compress 0x0/0x0/0x0, omap 0xb837, meta 0x1a247c9), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.090482+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.090717+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.090915+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.091126+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.091373+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.091588+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.091827+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.092041+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.092244+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.092457+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.092630+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.092858+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.093083+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.093225+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.093359+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.093470+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.093654+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.093829+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.094074+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.094243+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.094413+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.094590+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.094824+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.095005+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.095143+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.095347+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.095581+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.095775+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.095960+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.096123+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.096284+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.096508+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.096767+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.096917+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.097130+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.097279+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.097512+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.097813+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.098129+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.098344+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.098660+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bec00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.543552399s of 46.655376434s, submitted: 33
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 16842752 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5bec00 session 0x560f2ffde8c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.099040+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5be000 session 0x560f317d4380
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5be400 session 0x560f317f0380
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.099214+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.099370+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 76 ms_handle_reset con 0x560f2f5be000 session 0x560f3178d880
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.099767+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600858 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.099911+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 77 ms_handle_reset con 0x560f2f5be400 session 0x560f317d41c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd8f4000/0x0/0x4ffc00000, data 0x8588da/0x8d6000, compress 0x0/0x0/0x0, omap 0xb51b, meta 0x1a24ae5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bfc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd8ef000/0x0/0x4ffc00000, data 0x859efb/0x8d9000, compress 0x0/0x0/0x0, omap 0xb1ab, meta 0x1a24e55), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16801792 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.100106+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 78 ms_handle_reset con 0x560f2f5bfc00 session 0x560f30088e00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fd8f0000/0x0/0x4ffc00000, data 0x85b4e9/0x8da000, compress 0x0/0x0/0x0, omap 0xae93, meta 0x1a2516d), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 16793600 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.100380+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 16793600 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fd8f0000/0x0/0x4ffc00000, data 0x85b4e9/0x8da000, compress 0x0/0x0/0x0, omap 0xae93, meta 0x1a2516d), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.100689+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 79 ms_handle_reset con 0x560f2f66d400 session 0x560f317f1a40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31bebc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 16482304 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.100905+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 81 ms_handle_reset con 0x560f31bebc00 session 0x560f3178c700
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622265 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 16424960 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.101302+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 82 ms_handle_reset con 0x560f2f5be000 session 0x560f3177e700
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 16384000 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31709c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.101466+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.323926926s of 10.448411942s, submitted: 71
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 83 ms_handle_reset con 0x560f31709c00 session 0x560f3060fa40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 83 ms_handle_reset con 0x560f30369c00 session 0x560f30089880
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 16195584 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.101765+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f3063fc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 16171008 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.101983+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fd8d3000/0x0/0x4ffc00000, data 0x86383f/0x8f3000, compress 0x0/0x0/0x0, omap 0xb11d, meta 0x1a24ee3), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 85 ms_handle_reset con 0x560f3063fc00 session 0x560f2f35ec40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f3063f800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 15990784 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.102185+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772835 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 24190976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fc8ce000/0x0/0x4ffc00000, data 0x186646d/0x18fa000, compress 0x0/0x0/0x0, omap 0xa68f, meta 0x1a25971), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.102360+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 85 ms_handle_reset con 0x560f3063f800 session 0x560f317b6540
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 24322048 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.102511+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fb8cd000/0x0/0x4ffc00000, data 0x2867a3a/0x28fd000, compress 0x0/0x0/0x0, omap 0xa6e7, meta 0x1a25919), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 87 ms_handle_reset con 0x560f2f5be000 session 0x560f317b61c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 24363008 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.102678+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fd8c9000/0x0/0x4ffc00000, data 0x86904b/0x8ff000, compress 0x0/0x0/0x0, omap 0xa167, meta 0x1a25e99), peers [1,2] op hist [1,1])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 88 ms_handle_reset con 0x560f30369c00 session 0x560f31817c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 88 ms_handle_reset con 0x560f31be8c00 session 0x560f3113ec40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 23265280 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.102970+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 89 ms_handle_reset con 0x560f31be8800 session 0x560f317428c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fd8c6000/0x0/0x4ffc00000, data 0x86a28c/0x900000, compress 0x0/0x0/0x0, omap 0x11d7f, meta 0x1a1e281), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 23175168 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.103140+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fd8c6000/0x0/0x4ffc00000, data 0x86a28c/0x900000, compress 0x0/0x0/0x0, omap 0x11d7f, meta 0x1a1e281), peers [1,2] op hist [1])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663562 data_alloc: 218103808 data_used: 0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 90 ms_handle_reset con 0x560f31be8400 session 0x560f31816e00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 21725184 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.103323+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 91 ms_handle_reset con 0x560f2f5be000 session 0x560f2ffdefc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 21618688 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.103485+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.685506821s of 10.200369835s, submitted: 215
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 92 ms_handle_reset con 0x560f30369c00 session 0x560f317d5500
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 21577728 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.103642+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 92 ms_handle_reset con 0x560f31be8800 session 0x560f316daa80
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 93 ms_handle_reset con 0x560f31be8c00 session 0x560f2ffdea80
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.103837+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.104114+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fc720000/0x0/0x4ffc00000, data 0x86feab/0x908000, compress 0x0/0x0/0x0, omap 0x1311a, meta 0x2bbcee6), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671629 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.104478+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.104754+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fc71f000/0x0/0x4ffc00000, data 0x871377/0x90b000, compress 0x0/0x0/0x0, omap 0x1345f, meta 0x2bbcba1), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fc71f000/0x0/0x4ffc00000, data 0x871377/0x90b000, compress 0x0/0x0/0x0, omap 0x1345f, meta 0x2bbcba1), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.104939+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.105139+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 94 ms_handle_reset con 0x560f31be8000 session 0x560f317b7340
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 21397504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.105301+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f31be8000 session 0x560f318161c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f2f5be000 session 0x560f3171bdc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f31be8800 session 0x560f3171ac40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681074 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.105486+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.105662+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.105828+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.105985+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.106186+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681074 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.106393+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.106560+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.106769+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.106997+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.961437225s of 17.086120605s, submitted: 79
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.107162+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683238 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.107300+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.107438+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fc715000/0x0/0x4ffc00000, data 0x87545d/0x915000, compress 0x0/0x0/0x0, omap 0x13f54, meta 0x2bbc0ac), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.107598+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 97 handle_osd_map epochs [99,99], i have 97, src has [1,99]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 99 ms_handle_reset con 0x560f31be8c00 session 0x560f3171bc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.107778+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31728c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.107934+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 99 heartbeat osd_stat(store_statfs(0x4fc70f000/0x0/0x4ffc00000, data 0x87804b/0x91b000, compress 0x0/0x0/0x0, omap 0x14184, meta 0x2bbbe7c), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690988 data_alloc: 218103808 data_used: 8154
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-mon[75249]: from='client.14816 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:24 compute-0 ceph-mon[75249]: pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:24 compute-0 ceph-mon[75249]: from='client.14818 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:24 compute-0 ceph-mon[75249]: from='client.14820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.108073+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 20103168 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.108210+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 19972096 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.108376+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 100 ms_handle_reset con 0x560f2f66d000 session 0x560f2f652a80
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 100 ms_handle_reset con 0x560f2f66c000 session 0x560f3180bc00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 19775488 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.108519+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.925825119s of 10.007178307s, submitted: 36
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 101 ms_handle_reset con 0x560f2f66d000 session 0x560f3180a000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.108772+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 19742720 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 101 ms_handle_reset con 0x560f2f66d400 session 0x560f2f652a80
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fc709000/0x0/0x4ffc00000, data 0x87ac47/0x921000, compress 0x0/0x0/0x0, omap 0x14abf, meta 0x2bbb541), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 697182 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.108933+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 19677184 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 102 ms_handle_reset con 0x560f31be9400 session 0x560f2f3521c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.109067+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 19628032 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 103 ms_handle_reset con 0x560f31be8c00 session 0x560f303ee540
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.109220+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 19587072 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.109388+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 19587072 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fc704000/0x0/0x4ffc00000, data 0x87d887/0x926000, compress 0x0/0x0/0x0, omap 0x152df, meta 0x2bbad21), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 103 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fc704000/0x0/0x4ffc00000, data 0x87d887/0x926000, compress 0x0/0x0/0x0, omap 0x152df, meta 0x2bbad21), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.109527+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 19709952 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705131 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.109735+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.109896+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.110088+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.110324+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66c000 session 0x560f3178ddc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66d000 session 0x560f3178cfc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66d400 session 0x560f3178da40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.917822838s of 10.005137444s, submitted: 54
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31be9400 session 0x560f31848380
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.110520+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 19685376 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31be8800 session 0x560f318488c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fc6fe000/0x0/0x4ffc00000, data 0x880374/0x92c000, compress 0x0/0x0/0x0, omap 0x1584b, meta 0x2bba7b5), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 707441 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.110685+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 19554304 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66c000 session 0x560f31848c40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66d000 session 0x560f31848fc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66d400 session 0x560f31849340
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.110841+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 19439616 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.111057+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.111231+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fc6dc000/0x0/0x4ffc00000, data 0x8a4374/0x950000, compress 0x0/0x0/0x0, omap 0x15a0d, meta 0x2bba5f3), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.111416+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 711202 data_alloc: 218103808 data_used: 10186
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.111627+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31701800 session 0x560f3171a8c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701c00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31a4c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f31a4c000 session 0x560f31848380
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f31701c00 session 0x560f2f653a40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.111859+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 19152896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f2f66c000 session 0x560f303eefc0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 106 handle_osd_map epochs [106,107], i have 107, src has [1,107]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 107 ms_handle_reset con 0x560f2f66d000 session 0x560f31742380
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.112083+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.112351+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fc6cf000/0x0/0x4ffc00000, data 0x8a854b/0x959000, compress 0x0/0x0/0x0, omap 0x161d1, meta 0x2bb9e2f), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.112519+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.317912102s of 11.386803627s, submitted: 46
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 108 ms_handle_reset con 0x560f2f66d400 session 0x560f3171a700
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 719524 data_alloc: 218103808 data_used: 10186
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.112729+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701800
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 19087360 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31701800 session 0x560f317b6a80
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fc6d3000/0x0/0x4ffc00000, data 0x8a854b/0x959000, compress 0x0/0x0/0x0, omap 0x1636c, meta 0x2bb9c94), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.112871+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19070976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.113039+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31be9400 session 0x560f31bfba40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31be8000 session 0x560f3171ae00
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19070976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 110 ms_handle_reset con 0x560f2f66c000 session 0x560f31848c40
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.113195+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x8871c5/0x93b000, compress 0x0/0x0/0x0, omap 0x17b0c, meta 0x2bb84f4), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.113353+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726486 data_alloc: 218103808 data_used: 8138
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x888691/0x93e000, compress 0x0/0x0/0x0, omap 0x17ec5, meta 0x2bb813b), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.113517+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f31728c00 session 0x560f2ffde8c0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f2f66d000 session 0x560f31bfb180
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.113657+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f2f66c000 session 0x560f3180b500
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.113842+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 112 ms_handle_reset con 0x560f31be9400 session 0x560f3171b340
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.114087+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.114235+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x889cb2/0x940000, compress 0x0/0x0/0x0, omap 0x18501, meta 0x2bb7aff), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.869652748s of 10.011584282s, submitted: 101
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 731600 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.114393+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.114564+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.114816+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.114991+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.115178+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fc6e7000/0x0/0x4ffc00000, data 0x88b17e/0x943000, compress 0x0/0x0/0x0, omap 0x187b3, meta 0x2bb784d), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.115384+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.115536+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.115760+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.115993+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.116246+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.116547+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.116792+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.117034+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.117218+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.117383+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.117613+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.117840+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.118002+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.118295+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.118523+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.118811+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.119020+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.119232+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.119524+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.119791+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.120034+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.120212+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.120471+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.120771+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.120994+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.121200+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.121357+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.121554+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.121838+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.122021+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.122193+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.122393+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.122543+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.122840+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.123056+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.123254+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.123485+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.123688+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.124017+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.124208+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.124357+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.124516+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.124657+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.124947+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.125079+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.125286+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.125496+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.126389+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.126626+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.126830+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.127010+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.127218+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.127372+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.127592+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.127832+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.127993+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.128164+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.128326+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.128493+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.128667+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.128846+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.129044+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.129204+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.129386+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.129593+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.129779+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.129939+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.130064+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.130217+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.130393+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.130622+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.130783+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.130962+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.131266+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.131475+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.131666+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.131866+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.132038+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.132401+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.132561+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.132730+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.132873+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.146758+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:44.146957+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:45.147086+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:46.147249+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:47.147392+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:48.147529+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:49.147637+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:50.147753+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 18882560 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:51.147884+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}'
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:23:24 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:23:24 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 18276352 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:52.148015+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 18472960 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:23:24 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:53.148152+0000)
Jan 10 17:23:24 compute-0 ceph-osd[85764]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:23:24 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:23:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 10 17:23:24 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351586704' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 10 17:23:24 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14832 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:24 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14836 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 10 17:23:24 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304602616' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: from='client.14822 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: from='client.14828 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3351586704' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3304602616' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 10 17:23:25 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 10 17:23:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1056028521' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 10 17:23:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/396990753' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='client.14832 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='client.14836 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1056028521' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/396990753' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 10 17:23:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 10 17:23:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 10 17:23:26 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 10 17:23:26 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383855005' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 10 17:23:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:26 compute-0 systemd[1]: Starting Hostname Service...
Jan 10 17:23:26 compute-0 systemd[1]: Started Hostname Service.
Jan 10 17:23:27 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14854 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:27 compute-0 ceph-mon[75249]: from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:27 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 10 17:23:27 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 10 17:23:27 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3383855005' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 10 17:23:27 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 10 17:23:27 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737837954' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 10 17:23:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 10 17:23:28 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409343500' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 10 17:23:28 compute-0 ceph-mon[75249]: pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:28 compute-0 ceph-mon[75249]: from='client.14854 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:28 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1737837954' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 10 17:23:28 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3409343500' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 10 17:23:28 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 10 17:23:28 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883057359' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 10 17:23:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 10 17:23:29 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312049913' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 10 17:23:29 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3883057359' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 10 17:23:29 compute-0 ceph-mon[75249]: pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:29 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1312049913' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 10 17:23:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:29 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 10 17:23:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4014052051' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 10 17:23:30 compute-0 ceph-mon[75249]: from='client.14864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:30 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4014052051' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 10 17:23:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:30 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 10 17:23:30 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529770529' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 10 17:23:31 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14870 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:31 compute-0 ceph-mon[75249]: pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:31 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/529770529' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 10 17:23:31 compute-0 ceph-mon[75249]: from='client.14870 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 10 17:23:31 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417096954' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 10 17:23:32 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14874 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:32 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3417096954' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 10 17:23:32 compute-0 ceph-mon[75249]: from='client.14874 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:32 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:33 compute-0 ceph-mon[75249]: from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:33 compute-0 ceph-mon[75249]: pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 10 17:23:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340149793' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 10 17:23:33 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 10 17:23:33 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318603715' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:34 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/340149793' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 10 17:23:34 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2318603715' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 10 17:23:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:34 compute-0 ovs-appctl[248211]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 10 17:23:34 compute-0 ovs-appctl[248219]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 10 17:23:34 compute-0 ovs-appctl[248228]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:34 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:35 compute-0 ceph-mon[75249]: from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:35 compute-0 ceph-mon[75249]: pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:35 compute-0 ceph-mon[75249]: from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 10 17:23:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036990990' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 10 17:23:35 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 10 17:23:35 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521583879' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:23:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249780750' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:23:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249780750' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14890 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3036990990' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2521583879' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/4249780750' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/4249780750' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:23:36 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14896 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 17:23:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667362873' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:23:37 compute-0 ceph-mon[75249]: from='client.14890 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:37 compute-0 ceph-mon[75249]: from='client.14896 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:23:37 compute-0 ceph-mon[75249]: pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:37 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/667362873' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.360592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817360757, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 630, "num_deletes": 251, "total_data_size": 370216, "memory_usage": 383144, "flush_reason": "Manual Compaction"}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817365712, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 364185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17730, "largest_seqno": 18359, "table_properties": {"data_size": 360807, "index_size": 1158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9135, "raw_average_key_size": 20, "raw_value_size": 353615, "raw_average_value_size": 784, "num_data_blocks": 52, "num_entries": 451, "num_filter_entries": 451, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065789, "oldest_key_time": 1768065789, "file_creation_time": 1768065817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5114 microseconds, and 2307 cpu microseconds.
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.365760) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 364185 bytes OK
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.365791) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.367876) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.367898) EVENT_LOG_v1 {"time_micros": 1768065817367894, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.367918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 366587, prev total WAL file size 366587, number of live WAL files 2.
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.368522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(355KB)], [41(5678KB)]
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817368618, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 6179054, "oldest_snapshot_seqno": -1}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3901 keys, 4996323 bytes, temperature: kUnknown
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817406946, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4996323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4969961, "index_size": 15501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 93035, "raw_average_key_size": 23, "raw_value_size": 4899588, "raw_average_value_size": 1255, "num_data_blocks": 659, "num_entries": 3901, "num_filter_entries": 3901, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.407214) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4996323 bytes
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.408686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.8 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 5.5 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(30.7) write-amplify(13.7) OK, records in: 4410, records dropped: 509 output_compression: NoCompression
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.408717) EVENT_LOG_v1 {"time_micros": 1768065817408708, "job": 20, "event": "compaction_finished", "compaction_time_micros": 38420, "compaction_time_cpu_micros": 20244, "output_level": 6, "num_output_files": 1, "total_output_size": 4996323, "num_input_records": 4410, "num_output_records": 3901, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817408880, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065817409727, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.368342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.409796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.409803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.409804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.409806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:37.409808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:23:37 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 10 17:23:37 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584958714' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:23:38
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'volumes', 'backups']
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:23:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 10 17:23:38 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494879747' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:38 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3584958714' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 10 17:23:38 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2494879747' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:38 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14904 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:23:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 10 17:23:39 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277035618' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:23:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:23:39 compute-0 ceph-mon[75249]: pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:39 compute-0 ceph-mon[75249]: from='client.14904 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:39 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2277035618' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 10 17:23:39 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380763255' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 10 17:23:40 compute-0 podman[249310]: 2026-01-10 17:23:40.091529065 +0000 UTC m=+0.084079497 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 10 17:23:40 compute-0 podman[249313]: 2026-01-10 17:23:40.134100878 +0000 UTC m=+0.116332576 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:23:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 10 17:23:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086304210' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:40 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/380763255' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 10 17:23:40 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3086304210' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 10 17:23:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364892107' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:41 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14914 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:41 compute-0 ceph-mon[75249]: pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:41 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1364892107' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 10 17:23:42 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614579653' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:42 compute-0 ceph-mon[75249]: from='client.14914 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:42 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2614579653' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:42 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 10 17:23:42 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506531732' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:43 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14920 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:43 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2506531732' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:43 compute-0 ceph-mon[75249]: pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:43 compute-0 ceph-mon[75249]: from='client.14920 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:43 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 10 17:23:43 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777967145' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14924 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:44 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 10 17:23:44 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2777967145' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 10 17:23:44 compute-0 ceph-mon[75249]: from='client.14924 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14926 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 10 17:23:45 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978993727' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:45 compute-0 ceph-mon[75249]: from='client.14926 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:45 compute-0 ceph-mon[75249]: pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:45 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/978993727' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 10 17:23:45 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 10 17:23:45 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317581347' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 10 17:23:45 compute-0 systemd[1]: Starting Time & Date Service...
Jan 10 17:23:45 compute-0 systemd[1]: Started Time & Date Service.
Jan 10 17:23:45 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14932 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14934 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:23:46 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/317581347' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 10 17:23:46 compute-0 ceph-mon[75249]: from='client.14932 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 10 17:23:46 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525233326' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 10 17:23:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861085720' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:47 compute-0 ceph-mon[75249]: from='client.14934 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:47 compute-0 ceph-mon[75249]: pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:47 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3525233326' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:23:47 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3861085720' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 10 17:23:47 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14940 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:48 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14942 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:48 compute-0 ceph-mon[75249]: from='client.14940 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:23:48.939 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:23:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:23:48.945 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:23:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:23:48.946 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:23:48 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 17:23:48 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3526893489' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 17:23:49 compute-0 ceph-osd[86809]: bluestore.MempoolThread fragmentation_score=0.000149 took=0.000078s
Jan 10 17:23:49 compute-0 ceph-osd[85764]: bluestore.MempoolThread fragmentation_score=0.000123 took=0.000055s
Jan 10 17:23:49 compute-0 ceph-osd[87867]: bluestore.MempoolThread fragmentation_score=0.000195 took=0.000113s
Jan 10 17:23:49 compute-0 sudo[250248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:23:49 compute-0 sudo[250248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:49 compute-0 sudo[250248]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:49 compute-0 sudo[250273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:23:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:49 compute-0 sudo[250273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:49 compute-0 ceph-mon[75249]: from='client.14942 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:23:49 compute-0 ceph-mon[75249]: pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:49 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3526893489' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 17:23:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 10 17:23:49 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253430208' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 10 17:23:50 compute-0 sudo[250273]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:50 compute-0 sudo[250331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:23:50 compute-0 sudo[250331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:50 compute-0 sudo[250331]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:50 compute-0 sudo[250356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:23:50 compute-0 sudo[250356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3253430208' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:23:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.699666983 +0000 UTC m=+0.065584432 container create 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:23:50 compute-0 systemd[1]: Started libpod-conmon-13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4.scope.
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.680370145 +0000 UTC m=+0.046287624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:50 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.858274213 +0000 UTC m=+0.224191762 container init 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.867228237 +0000 UTC m=+0.233145726 container start 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.871296227 +0000 UTC m=+0.237213706 container attach 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 17:23:50 compute-0 vigorous_beaver[250409]: 167 167
Jan 10 17:23:50 compute-0 systemd[1]: libpod-13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4.scope: Deactivated successfully.
Jan 10 17:23:50 compute-0 conmon[250409]: conmon 13baafd6261f7f942b53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4.scope/container/memory.events
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.884516956 +0000 UTC m=+0.250434415 container died 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a3e62bd0f4316a98d02a8e92baba7cbb0e6a3a522a92f08630153f0a460262-merged.mount: Deactivated successfully.
Jan 10 17:23:50 compute-0 podman[250393]: 2026-01-10 17:23:50.929616634 +0000 UTC m=+0.295534103 container remove 13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 17:23:50 compute-0 systemd[1]: libpod-conmon-13baafd6261f7f942b53dbea9941c7fdd705a0514f892dab0e894ae65ed8c9a4.scope: Deactivated successfully.
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.130759297 +0000 UTC m=+0.061369278 container create 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:23:51 compute-0 systemd[1]: Started libpod-conmon-8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398.scope.
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.106258685 +0000 UTC m=+0.036868736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:51 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.247846034 +0000 UTC m=+0.178456045 container init 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.259137147 +0000 UTC m=+0.189747148 container start 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.263717782 +0000 UTC m=+0.194327803 container attach 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 17:23:51 compute-0 ceph-mon[75249]: pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:51 compute-0 practical_albattani[250450]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:23:51 compute-0 practical_albattani[250450]: --> All data devices are unavailable
Jan 10 17:23:51 compute-0 systemd[1]: libpod-8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398.scope: Deactivated successfully.
Jan 10 17:23:51 compute-0 podman[250433]: 2026-01-10 17:23:51.942851579 +0000 UTC m=+0.873461560 container died 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4265ba742bd41ffd261ca9b5e9e25877f691a17e0e1306f16377cea4f6884b4-merged.mount: Deactivated successfully.
Jan 10 17:23:52 compute-0 podman[250433]: 2026-01-10 17:23:52.007276096 +0000 UTC m=+0.937886107 container remove 8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:23:52 compute-0 systemd[1]: libpod-conmon-8ac679a03c850d83ca0a60f7187b1a5e090e976ca340b72179b39abb1cc6b398.scope: Deactivated successfully.
Jan 10 17:23:52 compute-0 sudo[250356]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:52 compute-0 sudo[250482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:23:52 compute-0 sudo[250482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:52 compute-0 sudo[250482]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:52 compute-0 sudo[250507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:23:52 compute-0 sudo[250507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.493015939 +0000 UTC m=+0.032093136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.623160791 +0000 UTC m=+0.162237998 container create 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:23:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:52 compute-0 systemd[1]: Started libpod-conmon-0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b.scope.
Jan 10 17:23:52 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.740488156 +0000 UTC m=+0.279565423 container init 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.748937325 +0000 UTC m=+0.288014512 container start 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.752825519 +0000 UTC m=+0.291902746 container attach 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:23:52 compute-0 zealous_allen[250560]: 167 167
Jan 10 17:23:52 compute-0 systemd[1]: libpod-0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b.scope: Deactivated successfully.
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.756269621 +0000 UTC m=+0.295346808 container died 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0634eff3022f700784436c4c798dd98f301dbd0eae575915d9ea3ef81bb26d21-merged.mount: Deactivated successfully.
Jan 10 17:23:52 compute-0 podman[250544]: 2026-01-10 17:23:52.80072918 +0000 UTC m=+0.339806367 container remove 0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_allen, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:23:52 compute-0 systemd[1]: libpod-conmon-0ca3892314096a3f291411aae7bb977e51c0ac523ea4f399ea1e151930a7f12b.scope: Deactivated successfully.
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.041920302 +0000 UTC m=+0.089347242 container create 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:23:53 compute-0 systemd[1]: Started libpod-conmon-5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef.scope.
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.012530726 +0000 UTC m=+0.059957746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:53 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4899583f0cc1e9362126a94487dab04a0fb59980a539c851e20951ca721319a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4899583f0cc1e9362126a94487dab04a0fb59980a539c851e20951ca721319a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4899583f0cc1e9362126a94487dab04a0fb59980a539c851e20951ca721319a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4899583f0cc1e9362126a94487dab04a0fb59980a539c851e20951ca721319a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.16546292 +0000 UTC m=+0.212889870 container init 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.173347372 +0000 UTC m=+0.220774272 container start 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.176991689 +0000 UTC m=+0.224418689 container attach 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 17:23:53 compute-0 nova_compute[237049]: 2026-01-10 17:23:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:53 compute-0 nova_compute[237049]: 2026-01-10 17:23:53.353 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:23:53 compute-0 nova_compute[237049]: 2026-01-10 17:23:53.353 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:23:53 compute-0 nova_compute[237049]: 2026-01-10 17:23:53.399 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:23:53 compute-0 nova_compute[237049]: 2026-01-10 17:23:53.402 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:53 compute-0 epic_pare[250600]: {
Jan 10 17:23:53 compute-0 epic_pare[250600]:     "0": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:         {
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "devices": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "/dev/loop3"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             ],
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_name": "ceph_lv0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_size": "21470642176",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "name": "ceph_lv0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "tags": {
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_name": "ceph",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.crush_device_class": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.encrypted": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.objectstore": "bluestore",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_id": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.vdo": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.with_tpm": "0"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             },
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "vg_name": "ceph_vg0"
Jan 10 17:23:53 compute-0 epic_pare[250600]:         }
Jan 10 17:23:53 compute-0 epic_pare[250600]:     ],
Jan 10 17:23:53 compute-0 epic_pare[250600]:     "1": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:         {
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "devices": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "/dev/loop4"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             ],
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_name": "ceph_lv1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_size": "21470642176",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "name": "ceph_lv1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "tags": {
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_name": "ceph",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.crush_device_class": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.encrypted": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.objectstore": "bluestore",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_id": "1",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.vdo": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.with_tpm": "0"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             },
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "vg_name": "ceph_vg1"
Jan 10 17:23:53 compute-0 epic_pare[250600]:         }
Jan 10 17:23:53 compute-0 epic_pare[250600]:     ],
Jan 10 17:23:53 compute-0 epic_pare[250600]:     "2": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:         {
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "devices": [
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "/dev/loop5"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             ],
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_name": "ceph_lv2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_size": "21470642176",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "name": "ceph_lv2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "tags": {
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.cluster_name": "ceph",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.crush_device_class": "",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.encrypted": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.objectstore": "bluestore",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osd_id": "2",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.vdo": "0",
Jan 10 17:23:53 compute-0 epic_pare[250600]:                 "ceph.with_tpm": "0"
Jan 10 17:23:53 compute-0 epic_pare[250600]:             },
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "type": "block",
Jan 10 17:23:53 compute-0 epic_pare[250600]:             "vg_name": "ceph_vg2"
Jan 10 17:23:53 compute-0 epic_pare[250600]:         }
Jan 10 17:23:53 compute-0 epic_pare[250600]:     ]
Jan 10 17:23:53 compute-0 epic_pare[250600]: }
Jan 10 17:23:53 compute-0 systemd[1]: libpod-5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef.scope: Deactivated successfully.
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.509655815 +0000 UTC m=+0.557082755 container died 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4899583f0cc1e9362126a94487dab04a0fb59980a539c851e20951ca721319a4-merged.mount: Deactivated successfully.
Jan 10 17:23:53 compute-0 podman[250583]: 2026-01-10 17:23:53.571053742 +0000 UTC m=+0.618480692 container remove 5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:23:53 compute-0 systemd[1]: libpod-conmon-5120a3a50c73b48de51d36395deceaa1be77d8536f95b05456d577959d2636ef.scope: Deactivated successfully.
Jan 10 17:23:53 compute-0 sudo[250507]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:53 compute-0 sudo[250622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:23:53 compute-0 sudo[250622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:53 compute-0 ceph-mon[75249]: pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:53 compute-0 sudo[250622]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:53 compute-0 sudo[250647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:23:53 compute-0 sudo[250647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.196773806 +0000 UTC m=+0.022220535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:23:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.744089813 +0000 UTC m=+0.569536562 container create 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:23:54 compute-0 systemd[1]: Started libpod-conmon-9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920.scope.
Jan 10 17:23:54 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.854694189 +0000 UTC m=+0.680140998 container init 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.869016081 +0000 UTC m=+0.694462800 container start 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.873319648 +0000 UTC m=+0.698766397 container attach 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:23:54 compute-0 funny_golick[250701]: 167 167
Jan 10 17:23:54 compute-0 systemd[1]: libpod-9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920.scope: Deactivated successfully.
Jan 10 17:23:54 compute-0 conmon[250701]: conmon 9f7824aa5bf7db762a27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920.scope/container/memory.events
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.885820566 +0000 UTC m=+0.711267335 container died 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-070f9ea5eaa7bee6d49d3fac3f62113c6c51d552c97bfd14d0eec8133b1ff06d-merged.mount: Deactivated successfully.
Jan 10 17:23:54 compute-0 podman[250685]: 2026-01-10 17:23:54.938457186 +0000 UTC m=+0.763903905 container remove 9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:23:54 compute-0 systemd[1]: libpod-conmon-9f7824aa5bf7db762a27a600c8ae7553f2e72679215b01d9bc92fbe3b8564920.scope: Deactivated successfully.
Jan 10 17:23:55 compute-0 podman[250725]: 2026-01-10 17:23:55.166002935 +0000 UTC m=+0.050782016 container create 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:23:55 compute-0 systemd[1]: Started libpod-conmon-7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc.scope.
Jan 10 17:23:55 compute-0 podman[250725]: 2026-01-10 17:23:55.143129332 +0000 UTC m=+0.027908453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:23:55 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d2dc9d1f40613255e53c0e80abefa75eb30d450e3d7a8f286926a1a2be1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d2dc9d1f40613255e53c0e80abefa75eb30d450e3d7a8f286926a1a2be1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d2dc9d1f40613255e53c0e80abefa75eb30d450e3d7a8f286926a1a2be1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d2dc9d1f40613255e53c0e80abefa75eb30d450e3d7a8f286926a1a2be1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:23:55 compute-0 podman[250725]: 2026-01-10 17:23:55.287140962 +0000 UTC m=+0.171920043 container init 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 17:23:55 compute-0 podman[250725]: 2026-01-10 17:23:55.296144757 +0000 UTC m=+0.180923858 container start 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:23:55 compute-0 podman[250725]: 2026-01-10 17:23:55.300086023 +0000 UTC m=+0.184865104 container attach 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:23:55 compute-0 nova_compute[237049]: 2026-01-10 17:23:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:55 compute-0 ceph-mon[75249]: pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:56 compute-0 lvm[250823]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:23:56 compute-0 lvm[250823]: VG ceph_vg2 finished
Jan 10 17:23:56 compute-0 lvm[250822]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:23:56 compute-0 lvm[250822]: VG ceph_vg1 finished
Jan 10 17:23:56 compute-0 lvm[250819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:23:56 compute-0 lvm[250819]: VG ceph_vg0 finished
Jan 10 17:23:56 compute-0 ecstatic_goldwasser[250742]: {}
Jan 10 17:23:56 compute-0 systemd[1]: libpod-7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc.scope: Deactivated successfully.
Jan 10 17:23:56 compute-0 systemd[1]: libpod-7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc.scope: Consumed 1.551s CPU time.
Jan 10 17:23:56 compute-0 podman[250725]: 2026-01-10 17:23:56.292930575 +0000 UTC m=+1.177709756 container died 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1947d2dc9d1f40613255e53c0e80abefa75eb30d450e3d7a8f286926a1a2be1f-merged.mount: Deactivated successfully.
Jan 10 17:23:56 compute-0 nova_compute[237049]: 2026-01-10 17:23:56.340 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:56 compute-0 nova_compute[237049]: 2026-01-10 17:23:56.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:56 compute-0 podman[250725]: 2026-01-10 17:23:56.35660533 +0000 UTC m=+1.241384451 container remove 7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:23:56 compute-0 systemd[1]: libpod-conmon-7b3088a4941c0f2a593dd288cda14a2f1818720751dd310a1866e4409d75b7dc.scope: Deactivated successfully.
Jan 10 17:23:56 compute-0 sudo[250647]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:23:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:23:56 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:56 compute-0 sudo[250838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:23:56 compute-0 sudo[250838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:23:56 compute-0 sudo[250838]: pam_unix(sudo:session): session closed for user root
Jan 10 17:23:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:57 compute-0 nova_compute[237049]: 2026-01-10 17:23:57.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:57 compute-0 nova_compute[237049]: 2026-01-10 17:23:57.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:23:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:23:57 compute-0 ceph-mon[75249]: pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.355 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.383 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.384 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.384 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.385 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.386 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:23:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:23:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:23:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703577139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:23:58 compute-0 nova_compute[237049]: 2026-01-10 17:23:58.964 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.140 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.142 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5015MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.142 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.143 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.213 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.215 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:23:59 compute-0 nova_compute[237049]: 2026-01-10 17:23:59.234 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:23:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:24:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244966285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:00 compute-0 ceph-mon[75249]: pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1703577139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:00 compute-0 nova_compute[237049]: 2026-01-10 17:24:00.192 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.957s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:24:00 compute-0 nova_compute[237049]: 2026-01-10 17:24:00.200 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:24:00 compute-0 nova_compute[237049]: 2026-01-10 17:24:00.219 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:24:00 compute-0 nova_compute[237049]: 2026-01-10 17:24:00.222 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:24:00 compute-0 nova_compute[237049]: 2026-01-10 17:24:00.222 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:24:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1244966285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:01 compute-0 nova_compute[237049]: 2026-01-10 17:24:01.213 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:01 compute-0 nova_compute[237049]: 2026-01-10 17:24:01.214 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:02 compute-0 ceph-mon[75249]: pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:03 compute-0 ceph-mon[75249]: pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:05 compute-0 ceph-mon[75249]: pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:07 compute-0 ceph-mon[75249]: pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:09 compute-0 ceph-mon[75249]: pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:11 compute-0 podman[250908]: 2026-01-10 17:24:11.099340565 +0000 UTC m=+0.089234448 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 10 17:24:11 compute-0 podman[250909]: 2026-01-10 17:24:11.161201507 +0000 UTC m=+0.149786532 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 17:24:11 compute-0 ceph-mon[75249]: pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:13 compute-0 sudo[243757]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:13 compute-0 sshd-session[243756]: Received disconnect from 192.168.122.10 port 34978:11: disconnected by user
Jan 10 17:24:13 compute-0 sshd-session[243756]: Disconnected from user zuul 192.168.122.10 port 34978
Jan 10 17:24:13 compute-0 sshd-session[243753]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:24:13 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 10 17:24:13 compute-0 systemd[1]: session-52.scope: Consumed 2min 48.282s CPU time, 688.6M memory peak, read 271.2M from disk, written 65.7M to disk.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Session 52 logged out. Waiting for processes to exit.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Removed session 52.
Jan 10 17:24:13 compute-0 sshd-session[250952]: Accepted publickey for zuul from 192.168.122.10 port 58306 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:24:13 compute-0 systemd-logind[798]: New session 53 of user zuul.
Jan 10 17:24:13 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 10 17:24:13 compute-0 sshd-session[250952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:24:13 compute-0 sudo[250956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-10-xromwdu.tar.xz
Jan 10 17:24:13 compute-0 sudo[250956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:24:13 compute-0 sudo[250956]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:13 compute-0 sshd-session[250955]: Received disconnect from 192.168.122.10 port 58306:11: disconnected by user
Jan 10 17:24:13 compute-0 sshd-session[250955]: Disconnected from user zuul 192.168.122.10 port 58306
Jan 10 17:24:13 compute-0 sshd-session[250952]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:24:13 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Removed session 53.
Jan 10 17:24:13 compute-0 sshd-session[250981]: Accepted publickey for zuul from 192.168.122.10 port 58314 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:24:13 compute-0 systemd-logind[798]: New session 54 of user zuul.
Jan 10 17:24:13 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 10 17:24:13 compute-0 sshd-session[250981]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:24:13 compute-0 ceph-mon[75249]: pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:13 compute-0 sudo[250985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 10 17:24:13 compute-0 sudo[250985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:24:13 compute-0 sudo[250985]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:13 compute-0 sshd-session[250984]: Received disconnect from 192.168.122.10 port 58314:11: disconnected by user
Jan 10 17:24:13 compute-0 sshd-session[250984]: Disconnected from user zuul 192.168.122.10 port 58314
Jan 10 17:24:13 compute-0 sshd-session[250981]: pam_unix(sshd:session): session closed for user zuul
Jan 10 17:24:13 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Session 54 logged out. Waiting for processes to exit.
Jan 10 17:24:13 compute-0 systemd-logind[798]: Removed session 54.
Jan 10 17:24:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 17:24:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 17:24:15 compute-0 ceph-mon[75249]: pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:17 compute-0 ceph-mon[75249]: pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:18 compute-0 sshd-session[251014]: Invalid user admin from 216.36.124.133 port 55570
Jan 10 17:24:18 compute-0 sshd-session[251014]: Connection closed by invalid user admin 216.36.124.133 port 55570 [preauth]
Jan 10 17:24:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:19 compute-0 ceph-mon[75249]: pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:21 compute-0 ceph-mon[75249]: pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:23 compute-0 ceph-mon[75249]: pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:25 compute-0 ceph-mon[75249]: pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:27 compute-0 ceph-mon[75249]: pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:29 compute-0 ceph-mon[75249]: pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:31 compute-0 ceph-mon[75249]: pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:33 compute-0 ceph-mon[75249]: pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:36 compute-0 ceph-mon[75249]: pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:24:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/172550410' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:24:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:24:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/172550410' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:24:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:37 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/172550410' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:24:37 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/172550410' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:24:38 compute-0 ceph-mon[75249]: pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:24:38
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images']
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:24:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:24:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:24:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:40 compute-0 ceph-mon[75249]: pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:42 compute-0 ceph-mon[75249]: pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:42 compute-0 podman[251016]: 2026-01-10 17:24:42.097821825 +0000 UTC m=+0.091268057 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 10 17:24:42 compute-0 podman[251017]: 2026-01-10 17:24:42.186360713 +0000 UTC m=+0.180924793 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 10 17:24:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:44 compute-0 ceph-mon[75249]: pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:24:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:24:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:46 compute-0 ceph-mon[75249]: pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:48 compute-0 ceph-mon[75249]: pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:24:48.940 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:24:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:24:48.943 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:24:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:24:48.943 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:24:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:50 compute-0 ceph-mon[75249]: pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:52 compute-0 ceph-mon[75249]: pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:53 compute-0 ceph-mon[75249]: pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:53 compute-0 nova_compute[237049]: 2026-01-10 17:24:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:53 compute-0 nova_compute[237049]: 2026-01-10 17:24:53.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:24:53 compute-0 nova_compute[237049]: 2026-01-10 17:24:53.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:24:53 compute-0 nova_compute[237049]: 2026-01-10 17:24:53.369 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:24:53 compute-0 nova_compute[237049]: 2026-01-10 17:24:53.370 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.613445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894613669, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 905, "num_deletes": 255, "total_data_size": 787677, "memory_usage": 804248, "flush_reason": "Manual Compaction"}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894623807, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 776512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18360, "largest_seqno": 19264, "table_properties": {"data_size": 772034, "index_size": 2066, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9710, "raw_average_key_size": 18, "raw_value_size": 762925, "raw_average_value_size": 1469, "num_data_blocks": 94, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065818, "oldest_key_time": 1768065818, "file_creation_time": 1768065894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10372 microseconds, and 5951 cpu microseconds.
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.623887) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 776512 bytes OK
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.623926) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.625466) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.625495) EVENT_LOG_v1 {"time_micros": 1768065894625490, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.625525) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 783223, prev total WAL file size 783223, number of live WAL files 2.
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.626455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(758KB)], [44(4879KB)]
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894626625, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 5772835, "oldest_snapshot_seqno": -1}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3898 keys, 5675644 bytes, temperature: kUnknown
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894673058, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 5675644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5647578, "index_size": 17204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94145, "raw_average_key_size": 24, "raw_value_size": 5575602, "raw_average_value_size": 1430, "num_data_blocks": 731, "num_entries": 3898, "num_filter_entries": 3898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.674023) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 5675644 bytes
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.675830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.1 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 4.8 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(14.7) write-amplify(7.3) OK, records in: 4420, records dropped: 522 output_compression: NoCompression
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.675900) EVENT_LOG_v1 {"time_micros": 1768065894675859, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46522, "compaction_time_cpu_micros": 23948, "output_level": 6, "num_output_files": 1, "total_output_size": 5675644, "num_input_records": 4420, "num_output_records": 3898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894676413, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065894678019, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.626063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.678103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.678111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.678113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.678115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:24:54.678117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:24:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:55 compute-0 ceph-mon[75249]: pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:56 compute-0 nova_compute[237049]: 2026-01-10 17:24:56.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:56 compute-0 sudo[251061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:24:56 compute-0 sudo[251061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:56 compute-0 sudo[251061]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:56 compute-0 sudo[251086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:24:56 compute-0 sudo[251086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:57 compute-0 nova_compute[237049]: 2026-01-10 17:24:57.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:57 compute-0 sudo[251086]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:24:57 compute-0 sudo[251143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:24:57 compute-0 sudo[251143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:57 compute-0 sudo[251143]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:57 compute-0 sudo[251168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:24:57 compute-0 sudo[251168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:57 compute-0 ceph-mon[75249]: pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:24:57 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.003544658 +0000 UTC m=+0.052665524 container create 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:58 compute-0 systemd[1]: Started libpod-conmon-5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25.scope.
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:57.980440552 +0000 UTC m=+0.029561438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:24:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.104600239 +0000 UTC m=+0.153721155 container init 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.118847484 +0000 UTC m=+0.167968390 container start 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.123615913 +0000 UTC m=+0.172736829 container attach 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:24:58 compute-0 vigorous_almeida[251221]: 167 167
Jan 10 17:24:58 compute-0 systemd[1]: libpod-5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25.scope: Deactivated successfully.
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.130999927 +0000 UTC m=+0.180120813 container died 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6b9ddf5ea833cbcd0639ee8c7ce2fbb34482b8c8283264a0568bf1d16d9f8f4-merged.mount: Deactivated successfully.
Jan 10 17:24:58 compute-0 podman[251205]: 2026-01-10 17:24:58.182677696 +0000 UTC m=+0.231798592 container remove 5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_almeida, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:24:58 compute-0 systemd[1]: libpod-conmon-5b4cbaa9e0d0719794f8d596d9fb53ccbe41e8b55107f0c0a4a50d3bf9e44a25.scope: Deactivated successfully.
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.384 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.385 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.386 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:24:58 compute-0 podman[251245]: 2026-01-10 17:24:58.38619774 +0000 UTC m=+0.062612043 container create 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:58 compute-0 systemd[1]: Started libpod-conmon-5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101.scope.
Jan 10 17:24:58 compute-0 podman[251245]: 2026-01-10 17:24:58.362642252 +0000 UTC m=+0.039056625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:24:58 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:24:58 compute-0 podman[251245]: 2026-01-10 17:24:58.507455344 +0000 UTC m=+0.183869657 container init 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:24:58 compute-0 podman[251245]: 2026-01-10 17:24:58.516589201 +0000 UTC m=+0.193003494 container start 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:24:58 compute-0 podman[251245]: 2026-01-10 17:24:58.527483413 +0000 UTC m=+0.203897746 container attach 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:24:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508670544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:58 compute-0 nova_compute[237049]: 2026-01-10 17:24:58.927 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:24:59 compute-0 sad_hypatia[251262]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:24:59 compute-0 sad_hypatia[251262]: --> All data devices are unavailable
Jan 10 17:24:59 compute-0 systemd[1]: libpod-5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101.scope: Deactivated successfully.
Jan 10 17:24:59 compute-0 podman[251245]: 2026-01-10 17:24:59.078942615 +0000 UTC m=+0.755356948 container died 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.090 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.092 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5073MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.093 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.093 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6b73575b87618101ccfb5ba3ab1e4daf468c525c54cdb2436c7b7a876635fe-merged.mount: Deactivated successfully.
Jan 10 17:24:59 compute-0 podman[251245]: 2026-01-10 17:24:59.133628979 +0000 UTC m=+0.810043272 container remove 5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:24:59 compute-0 systemd[1]: libpod-conmon-5c34f02b8191ef7eb431cd2c8eb1429d7970cefabe483274edf93bd4c25f7101.scope: Deactivated successfully.
Jan 10 17:24:59 compute-0 sudo[251168]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.199 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.200 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.222 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:24:59 compute-0 sudo[251315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:24:59 compute-0 sudo[251315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:59 compute-0 sudo[251315]: pam_unix(sudo:session): session closed for user root
Jan 10 17:24:59 compute-0 sudo[251341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:24:59 compute-0 sudo[251341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:24:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.671499112 +0000 UTC m=+0.053443033 container create 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:24:59 compute-0 systemd[1]: Started libpod-conmon-2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2.scope.
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.648055688 +0000 UTC m=+0.029999659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:24:59 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:24:59 compute-0 ceph-mon[75249]: pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:24:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2508670544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.778979102 +0000 UTC m=+0.160923113 container init 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 17:24:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332406694' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.790374706 +0000 UTC m=+0.172318668 container start 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.794267654 +0000 UTC m=+0.176211595 container attach 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 10 17:24:59 compute-0 zen_franklin[251413]: 167 167
Jan 10 17:24:59 compute-0 systemd[1]: libpod-2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2.scope: Deactivated successfully.
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.801039732 +0000 UTC m=+0.182983693 container died 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.802 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.810 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.827 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.829 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:24:59 compute-0 nova_compute[237049]: 2026-01-10 17:24:59.829 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-99a210c0be63853c9515ca69d50a29ac5ac87519313687ac02cf6345857c107c-merged.mount: Deactivated successfully.
Jan 10 17:24:59 compute-0 podman[251396]: 2026-01-10 17:24:59.857836349 +0000 UTC m=+0.239780310 container remove 2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:24:59 compute-0 systemd[1]: libpod-conmon-2b297a1e4cdfad0c2f9047f5c840a1c1905c1cd5eef1ee2190d38757e81e79b2.scope: Deactivated successfully.
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.120366286 +0000 UTC m=+0.077522354 container create 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:25:00 compute-0 systemd[1]: Started libpod-conmon-72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225.scope.
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.089139147 +0000 UTC m=+0.046295265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:25:00 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e28ab8f6ca6abe0f0780abfc69e5ed816fae0a317925d451aec315a564d2fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e28ab8f6ca6abe0f0780abfc69e5ed816fae0a317925d451aec315a564d2fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e28ab8f6ca6abe0f0780abfc69e5ed816fae0a317925d451aec315a564d2fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e28ab8f6ca6abe0f0780abfc69e5ed816fae0a317925d451aec315a564d2fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.235312022 +0000 UTC m=+0.192468090 container init 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.252254185 +0000 UTC m=+0.209410223 container start 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.256256925 +0000 UTC m=+0.213412963 container attach 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:25:00 compute-0 eloquent_bell[251457]: {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     "0": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "devices": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "/dev/loop3"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             ],
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_name": "ceph_lv0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_size": "21470642176",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "name": "ceph_lv0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "tags": {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_name": "ceph",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.crush_device_class": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.encrypted": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.objectstore": "bluestore",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_id": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.vdo": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.with_tpm": "0"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             },
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "vg_name": "ceph_vg0"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         }
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     ],
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     "1": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "devices": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "/dev/loop4"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             ],
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_name": "ceph_lv1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_size": "21470642176",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "name": "ceph_lv1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "tags": {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_name": "ceph",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.crush_device_class": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.encrypted": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.objectstore": "bluestore",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_id": "1",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.vdo": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.with_tpm": "0"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             },
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "vg_name": "ceph_vg1"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         }
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     ],
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     "2": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "devices": [
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "/dev/loop5"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             ],
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_name": "ceph_lv2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_size": "21470642176",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "name": "ceph_lv2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "tags": {
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.cluster_name": "ceph",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.crush_device_class": "",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.encrypted": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.objectstore": "bluestore",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osd_id": "2",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.vdo": "0",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:                 "ceph.with_tpm": "0"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             },
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "type": "block",
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:             "vg_name": "ceph_vg2"
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:         }
Jan 10 17:25:00 compute-0 eloquent_bell[251457]:     ]
Jan 10 17:25:00 compute-0 eloquent_bell[251457]: }
Jan 10 17:25:00 compute-0 systemd[1]: libpod-72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225.scope: Deactivated successfully.
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.598494609 +0000 UTC m=+0.555650677 container died 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 17:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e28ab8f6ca6abe0f0780abfc69e5ed816fae0a317925d451aec315a564d2fc-merged.mount: Deactivated successfully.
Jan 10 17:25:00 compute-0 podman[251441]: 2026-01-10 17:25:00.658328202 +0000 UTC m=+0.615484290 container remove 72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 17:25:00 compute-0 systemd[1]: libpod-conmon-72c463650b14385c65bbab80a9be4b4736d939943d368f2abe6561bf4644d225.scope: Deactivated successfully.
Jan 10 17:25:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:00 compute-0 sudo[251341]: pam_unix(sudo:session): session closed for user root
Jan 10 17:25:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2332406694' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:25:00 compute-0 sudo[251481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:25:00 compute-0 sudo[251481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:25:00 compute-0 sudo[251481]: pam_unix(sudo:session): session closed for user root
Jan 10 17:25:00 compute-0 nova_compute[237049]: 2026-01-10 17:25:00.830 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:00 compute-0 nova_compute[237049]: 2026-01-10 17:25:00.832 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:25:00 compute-0 sudo[251506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:25:00 compute-0 sudo[251506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.233127766 +0000 UTC m=+0.053316661 container create 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:25:01 compute-0 systemd[1]: Started libpod-conmon-0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978.scope.
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.205157308 +0000 UTC m=+0.025346243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:25:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.319686294 +0000 UTC m=+0.139875199 container init 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.325553801 +0000 UTC m=+0.145742656 container start 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:25:01 compute-0 heuristic_wiles[251559]: 167 167
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.330878393 +0000 UTC m=+0.151067358 container attach 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:25:01 compute-0 systemd[1]: libpod-0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978.scope: Deactivated successfully.
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.331922989 +0000 UTC m=+0.152111864 container died 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:25:01 compute-0 nova_compute[237049]: 2026-01-10 17:25:01.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:01 compute-0 nova_compute[237049]: 2026-01-10 17:25:01.349 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0999f48bb4cd9eb69a99ed839fa73a20f8be915aaefb0f1842ead4f8c2f92f-merged.mount: Deactivated successfully.
Jan 10 17:25:01 compute-0 podman[251543]: 2026-01-10 17:25:01.374116982 +0000 UTC m=+0.194305847 container remove 0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:25:01 compute-0 systemd[1]: libpod-conmon-0b316f21655dcf75254fa6cca23d8bfdb6c1c979f051a72123a7c9841bfab978.scope: Deactivated successfully.
Jan 10 17:25:01 compute-0 podman[251585]: 2026-01-10 17:25:01.612863895 +0000 UTC m=+0.075323609 container create f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:25:01 compute-0 systemd[1]: Started libpod-conmon-f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3.scope.
Jan 10 17:25:01 compute-0 podman[251585]: 2026-01-10 17:25:01.583448382 +0000 UTC m=+0.045908146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:25:01 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656efbe1a6db8bda1fe3dc69b90b7406ab99c06b12a75d6ca558a1e3cacdf46e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656efbe1a6db8bda1fe3dc69b90b7406ab99c06b12a75d6ca558a1e3cacdf46e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656efbe1a6db8bda1fe3dc69b90b7406ab99c06b12a75d6ca558a1e3cacdf46e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656efbe1a6db8bda1fe3dc69b90b7406ab99c06b12a75d6ca558a1e3cacdf46e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:25:01 compute-0 podman[251585]: 2026-01-10 17:25:01.712290875 +0000 UTC m=+0.174750619 container init f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:25:01 compute-0 podman[251585]: 2026-01-10 17:25:01.727688299 +0000 UTC m=+0.190147993 container start f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:25:01 compute-0 podman[251585]: 2026-01-10 17:25:01.731645637 +0000 UTC m=+0.194105361 container attach f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 17:25:01 compute-0 ceph-mon[75249]: pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:02 compute-0 lvm[251680]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:25:02 compute-0 lvm[251680]: VG ceph_vg1 finished
Jan 10 17:25:02 compute-0 lvm[251679]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:25:02 compute-0 lvm[251679]: VG ceph_vg0 finished
Jan 10 17:25:02 compute-0 lvm[251682]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:25:02 compute-0 lvm[251682]: VG ceph_vg2 finished
Jan 10 17:25:02 compute-0 wonderful_perlman[251601]: {}
Jan 10 17:25:02 compute-0 systemd[1]: libpod-f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3.scope: Deactivated successfully.
Jan 10 17:25:02 compute-0 podman[251585]: 2026-01-10 17:25:02.611022316 +0000 UTC m=+1.073482010 container died f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 17:25:02 compute-0 systemd[1]: libpod-f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3.scope: Consumed 1.454s CPU time.
Jan 10 17:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-656efbe1a6db8bda1fe3dc69b90b7406ab99c06b12a75d6ca558a1e3cacdf46e-merged.mount: Deactivated successfully.
Jan 10 17:25:02 compute-0 podman[251585]: 2026-01-10 17:25:02.658672354 +0000 UTC m=+1.121132038 container remove f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:25:02 compute-0 systemd[1]: libpod-conmon-f4e55ff7eeca91cc5167bcb3cffad4272f2f3975f38908ec321c02c6527634d3.scope: Deactivated successfully.
Jan 10 17:25:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:02 compute-0 sudo[251506]: pam_unix(sudo:session): session closed for user root
Jan 10 17:25:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:25:02 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:25:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:25:02 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:25:02 compute-0 sudo[251696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:25:02 compute-0 sudo[251696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:25:02 compute-0 sudo[251696]: pam_unix(sudo:session): session closed for user root
Jan 10 17:25:03 compute-0 ceph-mon[75249]: pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:25:03 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:25:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:05 compute-0 ceph-mon[75249]: pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:07 compute-0 ceph-mon[75249]: pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:09 compute-0 ceph-mon[75249]: pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:11 compute-0 ceph-mon[75249]: pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:13 compute-0 podman[251721]: 2026-01-10 17:25:13.080070714 +0000 UTC m=+0.075709749 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:25:13 compute-0 podman[251722]: 2026-01-10 17:25:13.121013025 +0000 UTC m=+0.111079151 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:25:13 compute-0 ceph-mon[75249]: pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:16 compute-0 ceph-mon[75249]: pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:18 compute-0 ceph-mon[75249]: pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:19 compute-0 ceph-mon[75249]: pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:21 compute-0 ceph-mon[75249]: pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:23 compute-0 ceph-mon[75249]: pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:25 compute-0 ceph-mon[75249]: pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:27 compute-0 ceph-mon[75249]: pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:29 compute-0 ceph-mon[75249]: pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:31 compute-0 ceph-mon[75249]: pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:33 compute-0 ceph-mon[75249]: pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:35 compute-0 ceph-mon[75249]: pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:25:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961119227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:25:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:25:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961119227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:25:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3961119227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:25:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3961119227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:25:37 compute-0 ceph-mon[75249]: pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:25:38
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr']
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:25:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:25:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:25:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:39 compute-0 ceph-mon[75249]: pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:41 compute-0 ceph-mon[75249]: pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:43 compute-0 ceph-mon[75249]: pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:44 compute-0 podman[251767]: 2026-01-10 17:25:44.09231447 +0000 UTC m=+0.084324190 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 17:25:44 compute-0 podman[251768]: 2026-01-10 17:25:44.134885779 +0000 UTC m=+0.120723801 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:25:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:45 compute-0 ceph-mon[75249]: pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:47 compute-0 ceph-mon[75249]: pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:25:48.942 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:25:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:25:48.943 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:25:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:25:48.943 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:25:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:49 compute-0 sshd-session[251811]: Connection closed by authenticating user root 216.36.124.133 port 56712 [preauth]
Jan 10 17:25:49 compute-0 ceph-mon[75249]: pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:51 compute-0 ceph-mon[75249]: pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:53 compute-0 nova_compute[237049]: 2026-01-10 17:25:53.348 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:53 compute-0 nova_compute[237049]: 2026-01-10 17:25:53.349 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:25:53 compute-0 nova_compute[237049]: 2026-01-10 17:25:53.350 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:25:53 compute-0 nova_compute[237049]: 2026-01-10 17:25:53.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:25:53 compute-0 nova_compute[237049]: 2026-01-10 17:25:53.373 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:53 compute-0 ceph-mon[75249]: pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:55 compute-0 nova_compute[237049]: 2026-01-10 17:25:55.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:55 compute-0 ceph-mon[75249]: pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:57 compute-0 nova_compute[237049]: 2026-01-10 17:25:57.358 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:57 compute-0 ceph-mon[75249]: pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.336 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.379 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.380 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.381 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.381 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.382 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:25:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:25:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:25:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2131678779' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:25:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2131678779' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:25:58 compute-0 nova_compute[237049]: 2026-01-10 17:25:58.959 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.139 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.140 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.141 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.141 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.431 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.432 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.535 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing inventories for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 10 17:25:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.627 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating ProviderTree inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.628 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.646 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing aggregate associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.681 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing trait associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 10 17:25:59 compute-0 nova_compute[237049]: 2026-01-10 17:25:59.715 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:25:59 compute-0 ceph-mon[75249]: pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:26:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564144019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:26:00 compute-0 nova_compute[237049]: 2026-01-10 17:26:00.266 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:26:00 compute-0 nova_compute[237049]: 2026-01-10 17:26:00.273 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:26:00 compute-0 nova_compute[237049]: 2026-01-10 17:26:00.293 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:26:00 compute-0 nova_compute[237049]: 2026-01-10 17:26:00.295 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:26:00 compute-0 nova_compute[237049]: 2026-01-10 17:26:00.295 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:26:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3564144019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.296 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.296 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.296 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.357 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.357 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.357 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.372 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:01 compute-0 nova_compute[237049]: 2026-01-10 17:26:01.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 10 17:26:01 compute-0 ceph-mon[75249]: pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:02 compute-0 nova_compute[237049]: 2026-01-10 17:26:02.371 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:02 compute-0 sudo[251857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:26:02 compute-0 sudo[251857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:02 compute-0 sudo[251857]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:02 compute-0 sudo[251882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:26:03 compute-0 sudo[251882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:03 compute-0 sudo[251882]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:26:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:26:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:26:03 compute-0 sudo[251938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:26:03 compute-0 sudo[251938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:03 compute-0 sudo[251938]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:03 compute-0 sudo[251963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:26:03 compute-0 sudo[251963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:04 compute-0 ceph-mon[75249]: pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:26:04 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.276908751 +0000 UTC m=+0.066527204 container create 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:26:04 compute-0 systemd[1]: Started libpod-conmon-228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16.scope.
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.248320997 +0000 UTC m=+0.037939530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:04 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.387159801 +0000 UTC m=+0.176778374 container init 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.398887981 +0000 UTC m=+0.188506474 container start 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.403210295 +0000 UTC m=+0.192828748 container attach 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:26:04 compute-0 priceless_hypatia[252016]: 167 167
Jan 10 17:26:04 compute-0 systemd[1]: libpod-228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16.scope: Deactivated successfully.
Jan 10 17:26:04 compute-0 conmon[252016]: conmon 228f81e414dc1ade98fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16.scope/container/memory.events
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.407754074 +0000 UTC m=+0.197372527 container died 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6429e01d6737acb949aeeddf04a0c1c6fd9f25a332d8a0ac343a9a9ff2499de7-merged.mount: Deactivated successfully.
Jan 10 17:26:04 compute-0 podman[252000]: 2026-01-10 17:26:04.459903892 +0000 UTC m=+0.249522375 container remove 228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hypatia, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:26:04 compute-0 systemd[1]: libpod-conmon-228f81e414dc1ade98faff7ce019be228319fa0f1c5c4a0ccebe888307298d16.scope: Deactivated successfully.
Jan 10 17:26:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:04 compute-0 podman[252040]: 2026-01-10 17:26:04.651380676 +0000 UTC m=+0.062349514 container create 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 17:26:04 compute-0 systemd[1]: Started libpod-conmon-4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4.scope.
Jan 10 17:26:04 compute-0 podman[252040]: 2026-01-10 17:26:04.617367622 +0000 UTC m=+0.028336500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:04 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:04 compute-0 podman[252040]: 2026-01-10 17:26:04.75806713 +0000 UTC m=+0.169036008 container init 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:26:04 compute-0 podman[252040]: 2026-01-10 17:26:04.766438301 +0000 UTC m=+0.177407179 container start 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:26:04 compute-0 podman[252040]: 2026-01-10 17:26:04.770118319 +0000 UTC m=+0.181087197 container attach 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:26:05 compute-0 pensive_lewin[252056]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:26:05 compute-0 pensive_lewin[252056]: --> All data devices are unavailable
Jan 10 17:26:05 compute-0 systemd[1]: libpod-4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4.scope: Deactivated successfully.
Jan 10 17:26:05 compute-0 podman[252040]: 2026-01-10 17:26:05.3549659 +0000 UTC m=+0.765934808 container died 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9bb54280264417b4cb92d095a1966255271ea6bc98183b050225fb54948fdf9-merged.mount: Deactivated successfully.
Jan 10 17:26:05 compute-0 podman[252040]: 2026-01-10 17:26:05.413273936 +0000 UTC m=+0.824242774 container remove 4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:26:05 compute-0 systemd[1]: libpod-conmon-4396f997696e8fccab130e6d79ae5570bf52dade578879f6536a0c6e4dfc98c4.scope: Deactivated successfully.
Jan 10 17:26:05 compute-0 sudo[251963]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:05 compute-0 sudo[252088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:26:05 compute-0 sudo[252088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:05 compute-0 sudo[252088]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:05 compute-0 sudo[252113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:26:05 compute-0 sudo[252113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:05 compute-0 podman[252150]: 2026-01-10 17:26:05.988341884 +0000 UTC m=+0.069015494 container create 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:26:06 compute-0 ceph-mon[75249]: pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:06 compute-0 systemd[1]: Started libpod-conmon-6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2.scope.
Jan 10 17:26:06 compute-0 podman[252150]: 2026-01-10 17:26:05.957328931 +0000 UTC m=+0.038002601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:06 compute-0 podman[252150]: 2026-01-10 17:26:06.097931417 +0000 UTC m=+0.178605077 container init 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 17:26:06 compute-0 podman[252150]: 2026-01-10 17:26:06.110460787 +0000 UTC m=+0.191134397 container start 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:26:06 compute-0 podman[252150]: 2026-01-10 17:26:06.115262062 +0000 UTC m=+0.195935662 container attach 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:26:06 compute-0 wizardly_brahmagupta[252166]: 167 167
Jan 10 17:26:06 compute-0 systemd[1]: libpod-6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2.scope: Deactivated successfully.
Jan 10 17:26:06 compute-0 podman[252171]: 2026-01-10 17:26:06.18200123 +0000 UTC m=+0.041813752 container died 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 17:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4adaa00ece30159d563ade629b54fafcbad18425d99f283cd3a8019099e764e0-merged.mount: Deactivated successfully.
Jan 10 17:26:06 compute-0 podman[252171]: 2026-01-10 17:26:06.227026258 +0000 UTC m=+0.086838800 container remove 6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brahmagupta, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 17:26:06 compute-0 systemd[1]: libpod-conmon-6630bf3635e97f965bea0ff30d681bfc08885261ea5f21743f54ab52333b42a2.scope: Deactivated successfully.
Jan 10 17:26:06 compute-0 podman[252193]: 2026-01-10 17:26:06.509998921 +0000 UTC m=+0.069940065 container create 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:26:06 compute-0 systemd[1]: Started libpod-conmon-8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949.scope.
Jan 10 17:26:06 compute-0 podman[252193]: 2026-01-10 17:26:06.479565403 +0000 UTC m=+0.039506587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:06 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910086c56d4c1cd9bf9bb96b3ff73b34e1bca20036c015347fc7f28cfe12fe2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910086c56d4c1cd9bf9bb96b3ff73b34e1bca20036c015347fc7f28cfe12fe2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910086c56d4c1cd9bf9bb96b3ff73b34e1bca20036c015347fc7f28cfe12fe2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910086c56d4c1cd9bf9bb96b3ff73b34e1bca20036c015347fc7f28cfe12fe2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:06 compute-0 podman[252193]: 2026-01-10 17:26:06.624161774 +0000 UTC m=+0.184102898 container init 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:26:06 compute-0 podman[252193]: 2026-01-10 17:26:06.63694334 +0000 UTC m=+0.196884474 container start 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 17:26:06 compute-0 podman[252193]: 2026-01-10 17:26:06.642249427 +0000 UTC m=+0.202190541 container attach 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:26:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:06 compute-0 focused_franklin[252210]: {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     "0": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "devices": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "/dev/loop3"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             ],
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_name": "ceph_lv0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_size": "21470642176",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "name": "ceph_lv0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "tags": {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_name": "ceph",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.crush_device_class": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.encrypted": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.objectstore": "bluestore",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_id": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.vdo": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.with_tpm": "0"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             },
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "vg_name": "ceph_vg0"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         }
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     ],
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     "1": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "devices": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "/dev/loop4"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             ],
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_name": "ceph_lv1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_size": "21470642176",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "name": "ceph_lv1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "tags": {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_name": "ceph",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.crush_device_class": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.encrypted": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.objectstore": "bluestore",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_id": "1",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.vdo": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.with_tpm": "0"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             },
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "vg_name": "ceph_vg1"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         }
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     ],
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     "2": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "devices": [
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "/dev/loop5"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             ],
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_name": "ceph_lv2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_size": "21470642176",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "name": "ceph_lv2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "tags": {
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.cluster_name": "ceph",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.crush_device_class": "",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.encrypted": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.objectstore": "bluestore",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osd_id": "2",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.vdo": "0",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:                 "ceph.with_tpm": "0"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             },
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "type": "block",
Jan 10 17:26:06 compute-0 focused_franklin[252210]:             "vg_name": "ceph_vg2"
Jan 10 17:26:06 compute-0 focused_franklin[252210]:         }
Jan 10 17:26:06 compute-0 focused_franklin[252210]:     ]
Jan 10 17:26:06 compute-0 focused_franklin[252210]: }
Jan 10 17:26:07 compute-0 systemd[1]: libpod-8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949.scope: Deactivated successfully.
Jan 10 17:26:07 compute-0 podman[252193]: 2026-01-10 17:26:07.049367364 +0000 UTC m=+0.609308498 container died 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 17:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-910086c56d4c1cd9bf9bb96b3ff73b34e1bca20036c015347fc7f28cfe12fe2d-merged.mount: Deactivated successfully.
Jan 10 17:26:07 compute-0 podman[252193]: 2026-01-10 17:26:07.09723406 +0000 UTC m=+0.657175164 container remove 8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 17:26:07 compute-0 systemd[1]: libpod-conmon-8a54305c06ee901303e112576ef96c97ef9edd859e1165546f7363c18e450949.scope: Deactivated successfully.
Jan 10 17:26:07 compute-0 sudo[252113]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:07 compute-0 sudo[252229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:26:07 compute-0 sudo[252229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:07 compute-0 sudo[252229]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:07 compute-0 sudo[252254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:26:07 compute-0 sudo[252254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.646663543 +0000 UTC m=+0.067163158 container create 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:26:07 compute-0 systemd[1]: Started libpod-conmon-936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312.scope.
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.621558292 +0000 UTC m=+0.042057997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:07 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.741409822 +0000 UTC m=+0.161909527 container init 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.749823753 +0000 UTC m=+0.170323378 container start 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.754395873 +0000 UTC m=+0.174895528 container attach 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:26:07 compute-0 magical_jemison[252307]: 167 167
Jan 10 17:26:07 compute-0 systemd[1]: libpod-936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312.scope: Deactivated successfully.
Jan 10 17:26:07 compute-0 conmon[252307]: conmon 936e9d8985470efd9e7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312.scope/container/memory.events
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.757345523 +0000 UTC m=+0.177845148 container died 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d712ac7845423d852992a9f064f03bd87ab51e0acde2c08ac7f15a5b9ce93a8-merged.mount: Deactivated successfully.
Jan 10 17:26:07 compute-0 podman[252291]: 2026-01-10 17:26:07.807135135 +0000 UTC m=+0.227634790 container remove 936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:26:07 compute-0 systemd[1]: libpod-conmon-936e9d8985470efd9e7bb64acfe6356d84dca4e1fa1b09f1a4b6105e1a642312.scope: Deactivated successfully.
Jan 10 17:26:08 compute-0 ceph-mon[75249]: pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:08 compute-0 podman[252329]: 2026-01-10 17:26:08.033005173 +0000 UTC m=+0.073267885 container create e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 17:26:08 compute-0 systemd[1]: Started libpod-conmon-e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27.scope.
Jan 10 17:26:08 compute-0 podman[252329]: 2026-01-10 17:26:08.004641294 +0000 UTC m=+0.044904056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:26:08 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb1cd858ebb30ab9007a8e553774e91331ffb247a5154ba67ac5f01f6334191/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb1cd858ebb30ab9007a8e553774e91331ffb247a5154ba67ac5f01f6334191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb1cd858ebb30ab9007a8e553774e91331ffb247a5154ba67ac5f01f6334191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb1cd858ebb30ab9007a8e553774e91331ffb247a5154ba67ac5f01f6334191/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:26:08 compute-0 podman[252329]: 2026-01-10 17:26:08.137523865 +0000 UTC m=+0.177786587 container init e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 10 17:26:08 compute-0 podman[252329]: 2026-01-10 17:26:08.147031703 +0000 UTC m=+0.187294415 container start e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:26:08 compute-0 podman[252329]: 2026-01-10 17:26:08.151327455 +0000 UTC m=+0.191590167 container attach e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:26:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:08 compute-0 lvm[252425]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:26:08 compute-0 lvm[252423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:26:08 compute-0 lvm[252423]: VG ceph_vg1 finished
Jan 10 17:26:08 compute-0 lvm[252425]: VG ceph_vg2 finished
Jan 10 17:26:08 compute-0 lvm[252422]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:26:08 compute-0 lvm[252422]: VG ceph_vg0 finished
Jan 10 17:26:08 compute-0 lvm[252427]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:26:08 compute-0 lvm[252427]: VG ceph_vg2 finished
Jan 10 17:26:08 compute-0 interesting_panini[252344]: {}
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:09 compute-0 systemd[1]: libpod-e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27.scope: Deactivated successfully.
Jan 10 17:26:09 compute-0 systemd[1]: libpod-e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27.scope: Consumed 1.462s CPU time.
Jan 10 17:26:09 compute-0 podman[252329]: 2026-01-10 17:26:09.033959746 +0000 UTC m=+1.074222458 container died e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bb1cd858ebb30ab9007a8e553774e91331ffb247a5154ba67ac5f01f6334191-merged.mount: Deactivated successfully.
Jan 10 17:26:09 compute-0 podman[252329]: 2026-01-10 17:26:09.088176104 +0000 UTC m=+1.128438776 container remove e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 17:26:09 compute-0 systemd[1]: libpod-conmon-e2b095c8a695c99c2b7a2edd1537b1f12969a8c80e4374ae5cabc7d975c59f27.scope: Deactivated successfully.
Jan 10 17:26:09 compute-0 sudo[252254]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:26:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:26:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:09 compute-0 sudo[252441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:26:09 compute-0 sudo[252441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:26:09 compute-0 sudo[252441]: pam_unix(sudo:session): session closed for user root
Jan 10 17:26:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:10 compute-0 ceph-mon[75249]: pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:10 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:26:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:12 compute-0 ceph-mon[75249]: pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:12 compute-0 nova_compute[237049]: 2026-01-10 17:26:12.995 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:14 compute-0 ceph-mon[75249]: pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:15 compute-0 podman[252467]: 2026-01-10 17:26:15.094602679 +0000 UTC m=+0.083432967 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 10 17:26:15 compute-0 podman[252468]: 2026-01-10 17:26:15.124795795 +0000 UTC m=+0.111508743 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:26:16 compute-0 ceph-mon[75249]: pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:18 compute-0 ceph-mon[75249]: pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:20 compute-0 ceph-mon[75249]: pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:22 compute-0 ceph-mon[75249]: pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:24 compute-0 ceph-mon[75249]: pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:25 compute-0 ceph-mon[75249]: pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:27 compute-0 ceph-mon[75249]: pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:29 compute-0 ceph-mon[75249]: pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:31 compute-0 ceph-mon[75249]: pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:33 compute-0 ceph-mon[75249]: pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:35 compute-0 ceph-mon[75249]: pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:26:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279832207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:26:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:26:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279832207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:26:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2279832207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:26:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2279832207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:26:37 compute-0 ceph-mon[75249]: pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:26:38
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'cephfs.cephfs.meta']
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:26:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:26:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:26:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:39 compute-0 ceph-mon[75249]: pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:41 compute-0 ceph-mon[75249]: pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:43 compute-0 ceph-mon[75249]: pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:26:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:45 compute-0 ceph-mon[75249]: pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:46 compute-0 podman[252512]: 2026-01-10 17:26:46.096998403 +0000 UTC m=+0.088066710 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 10 17:26:46 compute-0 podman[252513]: 2026-01-10 17:26:46.179309573 +0000 UTC m=+0.165484353 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 10 17:26:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:47 compute-0 ceph-mon[75249]: pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:26:48.944 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:26:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:26:48.945 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:26:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:26:48.945 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:26:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:49 compute-0 ceph-mon[75249]: pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:51 compute-0 ceph-mon[75249]: pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:53 compute-0 nova_compute[237049]: 2026-01-10 17:26:53.376 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:53 compute-0 nova_compute[237049]: 2026-01-10 17:26:53.378 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:26:53 compute-0 nova_compute[237049]: 2026-01-10 17:26:53.379 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:26:53 compute-0 nova_compute[237049]: 2026-01-10 17:26:53.403 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:26:53 compute-0 nova_compute[237049]: 2026-01-10 17:26:53.403 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:53 compute-0 ceph-mon[75249]: pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:55 compute-0 ceph-mon[75249]: pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:57 compute-0 ceph-mon[75249]: pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.397 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.398 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.398 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.399 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.400 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:26:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:26:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:26:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290957834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:26:59 compute-0 ceph-mon[75249]: pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:26:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2290957834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:26:59 compute-0 nova_compute[237049]: 2026-01-10 17:26:59.966 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.226 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.228 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.228 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.228 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.313 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.313 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.338 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:27:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:27:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645754675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.888 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.894 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.908 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.910 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:27:00 compute-0 nova_compute[237049]: 2026-01-10 17:27:00.910 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:27:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1645754675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:27:01 compute-0 nova_compute[237049]: 2026-01-10 17:27:01.900 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:01 compute-0 nova_compute[237049]: 2026-01-10 17:27:01.901 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:01 compute-0 nova_compute[237049]: 2026-01-10 17:27:01.901 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:01 compute-0 nova_compute[237049]: 2026-01-10 17:27:01.902 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:01 compute-0 nova_compute[237049]: 2026-01-10 17:27:01.902 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:27:01 compute-0 ceph-mon[75249]: pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:03 compute-0 nova_compute[237049]: 2026-01-10 17:27:03.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:03 compute-0 ceph-mon[75249]: pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:05 compute-0 ceph-mon[75249]: pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:07 compute-0 ceph-mon[75249]: pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:09 compute-0 sudo[252602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:27:09 compute-0 sudo[252602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:09 compute-0 sudo[252602]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:09 compute-0 sudo[252627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:27:09 compute-0 sudo[252627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:09 compute-0 sudo[252627]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:27:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:27:09 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:27:10 compute-0 sudo[252682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:27:10 compute-0 sudo[252682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:10 compute-0 sudo[252682]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:10 compute-0 sudo[252707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:27:10 compute-0 sudo[252707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.455856922 +0000 UTC m=+0.045822276 container create b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 17:27:10 compute-0 systemd[1]: Started libpod-conmon-b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35.scope.
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.434625616 +0000 UTC m=+0.024590950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.567654267 +0000 UTC m=+0.157619631 container init b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.579163159 +0000 UTC m=+0.169128513 container start b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.583879702 +0000 UTC m=+0.173845106 container attach b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 17:27:10 compute-0 strange_tu[252760]: 167 167
Jan 10 17:27:10 compute-0 systemd[1]: libpod-b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35.scope: Deactivated successfully.
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.588418679 +0000 UTC m=+0.178384033 container died b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ddd750058bc209490f722540110900317d464a41f14c3666309e9e11342e1e7-merged.mount: Deactivated successfully.
Jan 10 17:27:10 compute-0 podman[252744]: 2026-01-10 17:27:10.645784937 +0000 UTC m=+0.235750291 container remove b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:27:10 compute-0 systemd[1]: libpod-conmon-b5b6cd1f3ced5f24ff4cdbe357cc23ca2c68ae5894ac8558e2b1ed5f9ecd1b35.scope: Deactivated successfully.
Jan 10 17:27:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:10 compute-0 podman[252785]: 2026-01-10 17:27:10.892899006 +0000 UTC m=+0.074561712 container create d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:27:10 compute-0 podman[252785]: 2026-01-10 17:27:10.861634469 +0000 UTC m=+0.043297215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:10 compute-0 systemd[1]: Started libpod-conmon-d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f.scope.
Jan 10 17:27:10 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:11 compute-0 podman[252785]: 2026-01-10 17:27:11.018993432 +0000 UTC m=+0.200656218 container init d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:27:11 compute-0 podman[252785]: 2026-01-10 17:27:11.031841762 +0000 UTC m=+0.213504428 container start d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:27:11 compute-0 podman[252785]: 2026-01-10 17:27:11.037051418 +0000 UTC m=+0.218714174 container attach d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:27:11 compute-0 pensive_payne[252802]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:27:11 compute-0 pensive_payne[252802]: --> All data devices are unavailable
Jan 10 17:27:11 compute-0 systemd[1]: libpod-d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f.scope: Deactivated successfully.
Jan 10 17:27:11 compute-0 podman[252785]: 2026-01-10 17:27:11.549678982 +0000 UTC m=+0.731341648 container died d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf0e168913fbea4f88f08834425f46c67becdd63f30ee92a86a8d9562c6d5965-merged.mount: Deactivated successfully.
Jan 10 17:27:11 compute-0 podman[252785]: 2026-01-10 17:27:11.590400884 +0000 UTC m=+0.772063550 container remove d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:27:11 compute-0 systemd[1]: libpod-conmon-d9210a79005b6d9cd795f8d5793673a0f0bd130f836bd896ff0320d41a8b6b5f.scope: Deactivated successfully.
Jan 10 17:27:11 compute-0 sudo[252707]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:11 compute-0 sudo[252832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:27:11 compute-0 sudo[252832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:11 compute-0 sudo[252832]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:11 compute-0 sudo[252857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:27:11 compute-0 sudo[252857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:12 compute-0 ceph-mon[75249]: pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.173861125 +0000 UTC m=+0.059503109 container create e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 17:27:12 compute-0 systemd[1]: Started libpod-conmon-e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd.scope.
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.147848996 +0000 UTC m=+0.033491030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.280165116 +0000 UTC m=+0.165807140 container init e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.291871114 +0000 UTC m=+0.177513098 container start e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.296476703 +0000 UTC m=+0.182118687 container attach e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:27:12 compute-0 upbeat_vaughan[252910]: 167 167
Jan 10 17:27:12 compute-0 systemd[1]: libpod-e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd.scope: Deactivated successfully.
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.299903949 +0000 UTC m=+0.185545933 container died e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c72960f66174b78fe93c8c2fc29e10e999ebbc12412738ecc465809ed2078e-merged.mount: Deactivated successfully.
Jan 10 17:27:12 compute-0 podman[252894]: 2026-01-10 17:27:12.349526391 +0000 UTC m=+0.235168335 container remove e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:27:12 compute-0 systemd[1]: libpod-conmon-e032d9e0e758e8192114db9698fdbc66eaada5c986e41396a9655ba28a08f2fd.scope: Deactivated successfully.
Jan 10 17:27:12 compute-0 podman[252934]: 2026-01-10 17:27:12.562502923 +0000 UTC m=+0.060249331 container create ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:27:12 compute-0 systemd[1]: Started libpod-conmon-ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba.scope.
Jan 10 17:27:12 compute-0 podman[252934]: 2026-01-10 17:27:12.532559153 +0000 UTC m=+0.030305611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:12 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a78923e7a05afdbc61174279ef8c804e8dae4a1cd672d5e9f1ee07def6c3c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a78923e7a05afdbc61174279ef8c804e8dae4a1cd672d5e9f1ee07def6c3c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a78923e7a05afdbc61174279ef8c804e8dae4a1cd672d5e9f1ee07def6c3c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a78923e7a05afdbc61174279ef8c804e8dae4a1cd672d5e9f1ee07def6c3c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:12 compute-0 podman[252934]: 2026-01-10 17:27:12.675058549 +0000 UTC m=+0.172804927 container init ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 17:27:12 compute-0 podman[252934]: 2026-01-10 17:27:12.689393621 +0000 UTC m=+0.187139989 container start ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:27:12 compute-0 podman[252934]: 2026-01-10 17:27:12.693427844 +0000 UTC m=+0.191174232 container attach ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:27:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]: {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     "0": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "devices": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "/dev/loop3"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             ],
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_name": "ceph_lv0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_size": "21470642176",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "name": "ceph_lv0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "tags": {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_name": "ceph",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.crush_device_class": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.encrypted": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.objectstore": "bluestore",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_id": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.vdo": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.with_tpm": "0"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             },
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "vg_name": "ceph_vg0"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         }
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     ],
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     "1": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "devices": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "/dev/loop4"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             ],
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_name": "ceph_lv1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_size": "21470642176",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "name": "ceph_lv1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "tags": {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_name": "ceph",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.crush_device_class": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.encrypted": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.objectstore": "bluestore",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_id": "1",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.vdo": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.with_tpm": "0"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             },
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "vg_name": "ceph_vg1"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         }
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     ],
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     "2": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "devices": [
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "/dev/loop5"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             ],
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_name": "ceph_lv2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_size": "21470642176",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "name": "ceph_lv2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "tags": {
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.cluster_name": "ceph",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.crush_device_class": "",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.encrypted": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.objectstore": "bluestore",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osd_id": "2",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.vdo": "0",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:                 "ceph.with_tpm": "0"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             },
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "type": "block",
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:             "vg_name": "ceph_vg2"
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:         }
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]:     ]
Jan 10 17:27:12 compute-0 awesome_leavitt[252951]: }
Jan 10 17:27:13 compute-0 systemd[1]: libpod-ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba.scope: Deactivated successfully.
Jan 10 17:27:13 compute-0 podman[252934]: 2026-01-10 17:27:13.035886167 +0000 UTC m=+0.533632545 container died ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a78923e7a05afdbc61174279ef8c804e8dae4a1cd672d5e9f1ee07def6c3c7-merged.mount: Deactivated successfully.
Jan 10 17:27:13 compute-0 podman[252934]: 2026-01-10 17:27:13.089426378 +0000 UTC m=+0.587172786 container remove ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:27:13 compute-0 systemd[1]: libpod-conmon-ac7532239d826f94126d50ad2073ec101ed613855c8d8a0a10cbcbca342092ba.scope: Deactivated successfully.
Jan 10 17:27:13 compute-0 sudo[252857]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:13 compute-0 sudo[252972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:27:13 compute-0 sudo[252972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:13 compute-0 sudo[252972]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:13 compute-0 sudo[252997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:27:13 compute-0 sudo[252997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.67896832 +0000 UTC m=+0.059923312 container create 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:27:13 compute-0 systemd[1]: Started libpod-conmon-225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b.scope.
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.652070305 +0000 UTC m=+0.033025397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:13 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.783331906 +0000 UTC m=+0.164286978 container init 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.798058879 +0000 UTC m=+0.179013881 container start 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.801949378 +0000 UTC m=+0.182904410 container attach 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:27:13 compute-0 awesome_merkle[253050]: 167 167
Jan 10 17:27:13 compute-0 systemd[1]: libpod-225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b.scope: Deactivated successfully.
Jan 10 17:27:13 compute-0 conmon[253050]: conmon 225d1a3637fd3084edee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b.scope/container/memory.events
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.806097154 +0000 UTC m=+0.187052186 container died 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe4db05b2dede91d8d46a362db42edbbcf259271a881f6c5dd3a8e6cc5e02a11-merged.mount: Deactivated successfully.
Jan 10 17:27:13 compute-0 podman[253034]: 2026-01-10 17:27:13.855458518 +0000 UTC m=+0.236413510 container remove 225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:27:13 compute-0 systemd[1]: libpod-conmon-225d1a3637fd3084edeed8ee43100ec1add776258abf4fbf7402aa66b6639c8b.scope: Deactivated successfully.
Jan 10 17:27:14 compute-0 ceph-mon[75249]: pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:14 compute-0 podman[253074]: 2026-01-10 17:27:14.124829072 +0000 UTC m=+0.056791394 container create f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 17:27:14 compute-0 systemd[1]: Started libpod-conmon-f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6.scope.
Jan 10 17:27:14 compute-0 podman[253074]: 2026-01-10 17:27:14.103032581 +0000 UTC m=+0.034994953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:27:14 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb6e7df66467396b28c7a073cbefdb777893d687056140a9bff07e56d1d88bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb6e7df66467396b28c7a073cbefdb777893d687056140a9bff07e56d1d88bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb6e7df66467396b28c7a073cbefdb777893d687056140a9bff07e56d1d88bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb6e7df66467396b28c7a073cbefdb777893d687056140a9bff07e56d1d88bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:27:14 compute-0 podman[253074]: 2026-01-10 17:27:14.239186659 +0000 UTC m=+0.171149001 container init f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:27:14 compute-0 podman[253074]: 2026-01-10 17:27:14.2545677 +0000 UTC m=+0.186530022 container start f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 17:27:14 compute-0 podman[253074]: 2026-01-10 17:27:14.258905772 +0000 UTC m=+0.190868384 container attach f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:27:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:15 compute-0 lvm[253170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:27:15 compute-0 lvm[253170]: VG ceph_vg1 finished
Jan 10 17:27:15 compute-0 lvm[253173]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:27:15 compute-0 lvm[253173]: VG ceph_vg2 finished
Jan 10 17:27:15 compute-0 lvm[253171]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:27:15 compute-0 lvm[253171]: VG ceph_vg0 finished
Jan 10 17:27:15 compute-0 lvm[253176]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:27:15 compute-0 lvm[253176]: VG ceph_vg2 finished
Jan 10 17:27:15 compute-0 frosty_rubin[253091]: {}
Jan 10 17:27:15 compute-0 nova_compute[237049]: 2026-01-10 17:27:15.154 237053 DEBUG oslo_concurrency.processutils [None req-7812d2ae-e55a-44b7-bbb1-865b49bf4cf2 a24eb11d16524fc38ae639f53539d933 76e2e6e093f244a88428425bacb84b1e - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:27:15 compute-0 systemd[1]: libpod-f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6.scope: Deactivated successfully.
Jan 10 17:27:15 compute-0 systemd[1]: libpod-f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6.scope: Consumed 1.571s CPU time.
Jan 10 17:27:15 compute-0 podman[253074]: 2026-01-10 17:27:15.164787412 +0000 UTC m=+1.096749764 container died f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 17:27:15 compute-0 nova_compute[237049]: 2026-01-10 17:27:15.183 237053 DEBUG oslo_concurrency.processutils [None req-7812d2ae-e55a-44b7-bbb1-865b49bf4cf2 a24eb11d16524fc38ae639f53539d933 76e2e6e093f244a88428425bacb84b1e - - default default] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb6e7df66467396b28c7a073cbefdb777893d687056140a9bff07e56d1d88bd-merged.mount: Deactivated successfully.
Jan 10 17:27:15 compute-0 podman[253074]: 2026-01-10 17:27:15.220917106 +0000 UTC m=+1.152879468 container remove f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_rubin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 17:27:15 compute-0 systemd[1]: libpod-conmon-f2e2dd7ad54d77e6cd8dcbf42e31b5b321d1ba283fd4ec901458c5f4e7563cd6.scope: Deactivated successfully.
Jan 10 17:27:15 compute-0 sudo[252997]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:27:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:27:15 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:15 compute-0 sudo[253193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:27:15 compute-0 sudo[253193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:27:15 compute-0 sudo[253193]: pam_unix(sudo:session): session closed for user root
Jan 10 17:27:16 compute-0 ceph-mon[75249]: pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:16 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:27:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:17 compute-0 podman[253218]: 2026-01-10 17:27:17.093513725 +0000 UTC m=+0.089052258 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:27:17 compute-0 podman[253219]: 2026-01-10 17:27:17.164546087 +0000 UTC m=+0.157738184 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 17:27:18 compute-0 ceph-mon[75249]: pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:27:18 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4613 writes, 20K keys, 4613 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4613 writes, 4613 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1508 writes, 6817 keys, 1508 commit groups, 1.0 writes per commit group, ingest: 6.50 MB, 0.01 MB/s
                                           Interval WAL: 1508 writes, 1508 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.5      0.18              0.08        11    0.016       0      0       0.0       0.0
                                             L6      1/0    5.41 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.2    120.7    100.0      0.52              0.28        10    0.052     38K   5293       0.0       0.0
                                            Sum      1/0    5.41 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.2     89.6     97.3      0.70              0.36        21    0.033     38K   5293       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    101.6    103.7      0.31              0.15        10    0.031     21K   3024       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    120.7    100.0      0.52              0.28        10    0.052     38K   5293       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.3      0.18              0.08        10    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.04 MB/s write, 0.06 GB read, 0.03 MB/s read, 0.7 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 308.00 MB usage: 5.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000191 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(559,5.28 MB,1.71461%) FilterBlock(22,113.36 KB,0.0359424%) IndexBlock(22,220.55 KB,0.0699279%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 10 17:27:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:20 compute-0 ceph-mon[75249]: pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.053394) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040053869, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1415, "num_deletes": 251, "total_data_size": 1503072, "memory_usage": 1528160, "flush_reason": "Manual Compaction"}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040068435, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1462211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19265, "largest_seqno": 20679, "table_properties": {"data_size": 1455603, "index_size": 3747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13760, "raw_average_key_size": 19, "raw_value_size": 1442345, "raw_average_value_size": 2075, "num_data_blocks": 172, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065895, "oldest_key_time": 1768065895, "file_creation_time": 1768066040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15156 microseconds, and 6925 cpu microseconds.
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.068557) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1462211 bytes OK
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.068593) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.070409) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.070431) EVENT_LOG_v1 {"time_micros": 1768066040070425, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.070462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1496804, prev total WAL file size 1496804, number of live WAL files 2.
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.071481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1427KB)], [47(5542KB)]
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040071563, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 7137855, "oldest_snapshot_seqno": -1}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4079 keys, 5927510 bytes, temperature: kUnknown
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040114092, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5927510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5898025, "index_size": 18175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 98478, "raw_average_key_size": 24, "raw_value_size": 5822613, "raw_average_value_size": 1427, "num_data_blocks": 771, "num_entries": 4079, "num_filter_entries": 4079, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768066040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.114558) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5927510 bytes
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.115991) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.2 rd, 138.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 5.4 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(8.9) write-amplify(4.1) OK, records in: 4593, records dropped: 514 output_compression: NoCompression
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.116015) EVENT_LOG_v1 {"time_micros": 1768066040116002, "job": 24, "event": "compaction_finished", "compaction_time_micros": 42691, "compaction_time_cpu_micros": 21588, "output_level": 6, "num_output_files": 1, "total_output_size": 5927510, "num_input_records": 4593, "num_output_records": 4079, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040116494, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066040117678, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.071263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.117891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.117901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.117903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.117905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:27:20.117907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:27:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:21 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:21.760 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 10 17:27:21 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:21.763 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 10 17:27:22 compute-0 ceph-mon[75249]: pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:23 compute-0 sshd-session[253263]: Connection closed by authenticating user root 216.36.124.133 port 57900 [preauth]
Jan 10 17:27:24 compute-0 ceph-mon[75249]: pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:26 compute-0 ceph-mon[75249]: pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:28 compute-0 ceph-mon[75249]: pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:29 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:29.766 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 10 17:27:30 compute-0 ceph-mon[75249]: pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:32 compute-0 ceph-mon[75249]: pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:33 compute-0 ceph-mon[75249]: pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:35 compute-0 ceph-mon[75249]: pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:27:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846208037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:27:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:27:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846208037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:27:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2846208037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:27:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/2846208037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:27:37 compute-0 ceph-mon[75249]: pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:27:38
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data']
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:27:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:27:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:27:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:39 compute-0 ceph-mon[75249]: pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:41 compute-0 ceph-mon[75249]: pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:43 compute-0 ceph-mon[75249]: pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:27:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:45 compute-0 ceph-mon[75249]: pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:47 compute-0 ceph-mon[75249]: pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:48 compute-0 podman[253265]: 2026-01-10 17:27:48.11591303 +0000 UTC m=+0.113109461 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:27:48 compute-0 podman[253266]: 2026-01-10 17:27:48.136922693 +0000 UTC m=+0.119328077 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 10 17:27:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:48.946 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:27:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:48.946 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:27:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:27:48.947 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:27:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:49 compute-0 ceph-mon[75249]: pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:51 compute-0 rsyslogd[1006]: imjournal: 15364 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 10 17:27:51 compute-0 ceph-mon[75249]: pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:53 compute-0 ceph-mon[75249]: pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:55 compute-0 nova_compute[237049]: 2026-01-10 17:27:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:55 compute-0 nova_compute[237049]: 2026-01-10 17:27:55.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:27:55 compute-0 nova_compute[237049]: 2026-01-10 17:27:55.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:27:55 compute-0 nova_compute[237049]: 2026-01-10 17:27:55.374 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:27:55 compute-0 nova_compute[237049]: 2026-01-10 17:27:55.376 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:55 compute-0 ceph-mon[75249]: pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:57 compute-0 ceph-mon[75249]: pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.402 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:27:59 compute-0 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:27:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:27:59 compute-0 ceph-mon[75249]: pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:27:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:27:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937330844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.005 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.258 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.261 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.261 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.262 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.364 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.365 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.397 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:28:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:28:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636035365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.956 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.966 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:28:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2937330844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:28:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2636035365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.984 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.987 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:28:00 compute-0 nova_compute[237049]: 2026-01-10 17:28:00.987 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:28:01 compute-0 ceph-mon[75249]: pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:02 compute-0 nova_compute[237049]: 2026-01-10 17:28:02.977 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:02 compute-0 nova_compute[237049]: 2026-01-10 17:28:02.978 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:03 compute-0 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:03 compute-0 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:03 compute-0 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:28:03 compute-0 nova_compute[237049]: 2026-01-10 17:28:03.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:03 compute-0 ceph-mon[75249]: pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:05 compute-0 nova_compute[237049]: 2026-01-10 17:28:05.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:06 compute-0 ceph-mon[75249]: pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:08 compute-0 ceph-mon[75249]: pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:10 compute-0 ceph-mon[75249]: pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:12 compute-0 ceph-mon[75249]: pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:14 compute-0 ceph-mon[75249]: pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:15 compute-0 sudo[253353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:28:15 compute-0 sudo[253353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:15 compute-0 sudo[253353]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:15 compute-0 sudo[253378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:28:15 compute-0 sudo[253378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:16 compute-0 sudo[253378]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:16 compute-0 sudo[253434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:28:16 compute-0 sudo[253434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:16 compute-0 sudo[253434]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:16 compute-0 sudo[253459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 10 17:28:16 compute-0 sudo[253459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:16 compute-0 sudo[253459]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:28:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:28:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:28:16 compute-0 sudo[253502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:28:16 compute-0 sudo[253502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:16 compute-0 sudo[253502]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:17 compute-0 sudo[253527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:28:17 compute-0 sudo[253527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:17 compute-0 podman[253564]: 2026-01-10 17:28:17.410005764 +0000 UTC m=+0.063229614 container create 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 17:28:17 compute-0 systemd[1]: Started libpod-conmon-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope.
Jan 10 17:28:17 compute-0 podman[253564]: 2026-01-10 17:28:17.380878813 +0000 UTC m=+0.034102673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:17 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:17 compute-0 podman[253564]: 2026-01-10 17:28:17.515491949 +0000 UTC m=+0.168715829 container init 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 17:28:17 compute-0 podman[253564]: 2026-01-10 17:28:17.526315985 +0000 UTC m=+0.179539835 container start 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 10 17:28:17 compute-0 podman[253564]: 2026-01-10 17:28:17.530890824 +0000 UTC m=+0.184114724 container attach 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:28:17 compute-0 gifted_montalcini[253580]: 167 167
Jan 10 17:28:17 compute-0 systemd[1]: libpod-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope: Deactivated successfully.
Jan 10 17:28:17 compute-0 conmon[253580]: conmon 8b3ce6910b4050383419 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope/container/memory.events
Jan 10 17:28:17 compute-0 podman[253585]: 2026-01-10 17:28:17.594190059 +0000 UTC m=+0.036221193 container died 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eeeaff61198cacb7be1d7252b44629cb40eb2777edbf12b6bc67b8453e0372a-merged.mount: Deactivated successfully.
Jan 10 17:28:17 compute-0 podman[253585]: 2026-01-10 17:28:17.629480844 +0000 UTC m=+0.071511948 container remove 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:28:17 compute-0 systemd[1]: libpod-conmon-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope: Deactivated successfully.
Jan 10 17:28:17 compute-0 ceph-mon[75249]: pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:28:17 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:28:17 compute-0 podman[253607]: 2026-01-10 17:28:17.867484687 +0000 UTC m=+0.070065077 container create 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:17 compute-0 systemd[1]: Started libpod-conmon-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope.
Jan 10 17:28:17 compute-0 podman[253607]: 2026-01-10 17:28:17.836480363 +0000 UTC m=+0.039060803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:17 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:17 compute-0 podman[253607]: 2026-01-10 17:28:17.991498875 +0000 UTC m=+0.194079285 container init 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 17:28:18 compute-0 podman[253607]: 2026-01-10 17:28:18.006867059 +0000 UTC m=+0.209447449 container start 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:28:18 compute-0 podman[253607]: 2026-01-10 17:28:18.013809185 +0000 UTC m=+0.216389605 container attach 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:28:18 compute-0 great_goldstine[253624]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:28:18 compute-0 great_goldstine[253624]: --> All data devices are unavailable
Jan 10 17:28:18 compute-0 systemd[1]: libpod-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope: Deactivated successfully.
Jan 10 17:28:18 compute-0 podman[253607]: 2026-01-10 17:28:18.569270781 +0000 UTC m=+0.771851201 container died 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185-merged.mount: Deactivated successfully.
Jan 10 17:28:18 compute-0 podman[253607]: 2026-01-10 17:28:18.636916419 +0000 UTC m=+0.839496809 container remove 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:28:18 compute-0 systemd[1]: libpod-conmon-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope: Deactivated successfully.
Jan 10 17:28:18 compute-0 sudo[253527]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:18 compute-0 podman[253645]: 2026-01-10 17:28:18.721516736 +0000 UTC m=+0.102235755 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 10 17:28:18 compute-0 podman[253653]: 2026-01-10 17:28:18.763612322 +0000 UTC m=+0.144445724 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 10 17:28:18 compute-0 sudo[253699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:28:18 compute-0 sudo[253699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:18 compute-0 sudo[253699]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:18 compute-0 sudo[253728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:28:18 compute-0 sudo[253728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.127591428 +0000 UTC m=+0.037717335 container create e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:28:19 compute-0 systemd[1]: Started libpod-conmon-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope.
Jan 10 17:28:19 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.110909927 +0000 UTC m=+0.021035854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.211481424 +0000 UTC m=+0.121607411 container init e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.220099127 +0000 UTC m=+0.130225044 container start e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.223848483 +0000 UTC m=+0.133974480 container attach e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 17:28:19 compute-0 beautiful_tesla[253780]: 167 167
Jan 10 17:28:19 compute-0 systemd[1]: libpod-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope: Deactivated successfully.
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.225572031 +0000 UTC m=+0.135697948 container died e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3acdfcf703d7b7c189f4744e3a2b853c564b74b7c1852f50f3a97979327a6ba7-merged.mount: Deactivated successfully.
Jan 10 17:28:19 compute-0 podman[253764]: 2026-01-10 17:28:19.264371746 +0000 UTC m=+0.174497663 container remove e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:28:19 compute-0 systemd[1]: libpod-conmon-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope: Deactivated successfully.
Jan 10 17:28:19 compute-0 podman[253803]: 2026-01-10 17:28:19.523888476 +0000 UTC m=+0.069247775 container create f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:28:19 compute-0 systemd[1]: Started libpod-conmon-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope.
Jan 10 17:28:19 compute-0 podman[253803]: 2026-01-10 17:28:19.49425401 +0000 UTC m=+0.039613359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:19 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:19 compute-0 podman[253803]: 2026-01-10 17:28:19.631582643 +0000 UTC m=+0.176942002 container init f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 17:28:19 compute-0 podman[253803]: 2026-01-10 17:28:19.64814239 +0000 UTC m=+0.193501659 container start f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 17:28:19 compute-0 podman[253803]: 2026-01-10 17:28:19.654374846 +0000 UTC m=+0.199734145 container attach f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 17:28:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:19 compute-0 ceph-mon[75249]: pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]: {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     "0": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "devices": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "/dev/loop3"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             ],
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_name": "ceph_lv0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_size": "21470642176",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "name": "ceph_lv0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "tags": {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_name": "ceph",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.crush_device_class": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.encrypted": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.objectstore": "bluestore",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_id": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.vdo": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.with_tpm": "0"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             },
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "vg_name": "ceph_vg0"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         }
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     ],
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     "1": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "devices": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "/dev/loop4"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             ],
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_name": "ceph_lv1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_size": "21470642176",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "name": "ceph_lv1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "tags": {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_name": "ceph",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.crush_device_class": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.encrypted": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.objectstore": "bluestore",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_id": "1",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.vdo": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.with_tpm": "0"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             },
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "vg_name": "ceph_vg1"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         }
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     ],
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     "2": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "devices": [
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "/dev/loop5"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             ],
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_name": "ceph_lv2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_size": "21470642176",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "name": "ceph_lv2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "tags": {
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.cluster_name": "ceph",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.crush_device_class": "",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.encrypted": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.objectstore": "bluestore",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osd_id": "2",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.vdo": "0",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:                 "ceph.with_tpm": "0"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             },
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "type": "block",
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:             "vg_name": "ceph_vg2"
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:         }
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]:     ]
Jan 10 17:28:19 compute-0 flamboyant_cannon[253819]: }
Jan 10 17:28:20 compute-0 systemd[1]: libpod-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope: Deactivated successfully.
Jan 10 17:28:20 compute-0 podman[253803]: 2026-01-10 17:28:20.001345122 +0000 UTC m=+0.546704441 container died f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b-merged.mount: Deactivated successfully.
Jan 10 17:28:20 compute-0 podman[253803]: 2026-01-10 17:28:20.065742809 +0000 UTC m=+0.611102118 container remove f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 17:28:20 compute-0 systemd[1]: libpod-conmon-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope: Deactivated successfully.
Jan 10 17:28:20 compute-0 sudo[253728]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:20 compute-0 sudo[253841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:28:20 compute-0 sudo[253841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:20 compute-0 sudo[253841]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:20 compute-0 sudo[253866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:28:20 compute-0 sudo[253866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.63590702 +0000 UTC m=+0.061402632 container create 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:28:20 compute-0 systemd[1]: Started libpod-conmon-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope.
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.614914928 +0000 UTC m=+0.040410550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:20 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.729872981 +0000 UTC m=+0.155368653 container init 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.739157373 +0000 UTC m=+0.164652975 container start 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.744553115 +0000 UTC m=+0.170048727 container attach 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 10 17:28:20 compute-0 gifted_poincare[253920]: 167 167
Jan 10 17:28:20 compute-0 systemd[1]: libpod-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope: Deactivated successfully.
Jan 10 17:28:20 compute-0 conmon[253920]: conmon 58249e71bf08426a5d74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope/container/memory.events
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.746896931 +0000 UTC m=+0.172392563 container died 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31c834a4af00a30fa32f4275bdc33b7e0b91a8d1fa1ff26f045d3de76e55136-merged.mount: Deactivated successfully.
Jan 10 17:28:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:20 compute-0 podman[253903]: 2026-01-10 17:28:20.804343061 +0000 UTC m=+0.229838683 container remove 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:28:20 compute-0 systemd[1]: libpod-conmon-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope: Deactivated successfully.
Jan 10 17:28:21 compute-0 podman[253944]: 2026-01-10 17:28:21.037026744 +0000 UTC m=+0.064160321 container create 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:21 compute-0 systemd[1]: Started libpod-conmon-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope.
Jan 10 17:28:21 compute-0 podman[253944]: 2026-01-10 17:28:21.010823405 +0000 UTC m=+0.037957062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:28:21 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:28:21 compute-0 podman[253944]: 2026-01-10 17:28:21.142153849 +0000 UTC m=+0.169287466 container init 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:28:21 compute-0 podman[253944]: 2026-01-10 17:28:21.156742711 +0000 UTC m=+0.183876318 container start 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:28:21 compute-0 podman[253944]: 2026-01-10 17:28:21.161070803 +0000 UTC m=+0.188204430 container attach 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:28:21 compute-0 ceph-mon[75249]: pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:21 compute-0 lvm[254039]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:28:21 compute-0 lvm[254039]: VG ceph_vg1 finished
Jan 10 17:28:21 compute-0 lvm[254038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:28:21 compute-0 lvm[254038]: VG ceph_vg0 finished
Jan 10 17:28:21 compute-0 lvm[254041]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:28:21 compute-0 lvm[254041]: VG ceph_vg2 finished
Jan 10 17:28:21 compute-0 epic_tharp[253960]: {}
Jan 10 17:28:22 compute-0 systemd[1]: libpod-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Deactivated successfully.
Jan 10 17:28:22 compute-0 podman[253944]: 2026-01-10 17:28:22.026510903 +0000 UTC m=+1.053644470 container died 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:28:22 compute-0 systemd[1]: libpod-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Consumed 1.353s CPU time.
Jan 10 17:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5-merged.mount: Deactivated successfully.
Jan 10 17:28:22 compute-0 podman[253944]: 2026-01-10 17:28:22.081212506 +0000 UTC m=+1.108346113 container remove 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 17:28:22 compute-0 systemd[1]: libpod-conmon-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Deactivated successfully.
Jan 10 17:28:22 compute-0 sudo[253866]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:28:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:28:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:22 compute-0 sudo[254054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:28:22 compute-0 sudo[254054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:28:22 compute-0 sudo[254054]: pam_unix(sudo:session): session closed for user root
Jan 10 17:28:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:28:24 compute-0 ceph-mon[75249]: pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:26 compute-0 ceph-mon[75249]: pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:28 compute-0 ceph-mon[75249]: pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:28:29 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5552 writes, 23K keys, 5552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5552 writes, 988 syncs, 5.62 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1173 writes, 3350 keys, 1173 commit groups, 1.0 writes per commit group, ingest: 1.88 MB, 0.00 MB/s
                                           Interval WAL: 1173 writes, 520 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:28:30 compute-0 ceph-mon[75249]: pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:32 compute-0 ceph-mon[75249]: pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:34 compute-0 ceph-mon[75249]: pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:28:35 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6001 writes, 24K keys, 6001 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6001 writes, 1157 syncs, 5.19 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 4204 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 2.27 MB, 0.00 MB/s
                                           Interval WAL: 1449 writes, 642 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:28:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:28:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:28:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:28:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:28:36 compute-0 ceph-mon[75249]: pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:28:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:28:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:28:38
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms']
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:28:38 compute-0 ceph-mon[75249]: pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:28:39 compute-0 ceph-mon[75249]: pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:28:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:28:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:40 compute-0 sshd-session[254080]: Received disconnect from 80.94.93.233 port 63638:11:  [preauth]
Jan 10 17:28:40 compute-0 sshd-session[254080]: Disconnected from authenticating user root 80.94.93.233 port 63638 [preauth]
Jan 10 17:28:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:41 compute-0 ceph-mon[75249]: pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:28:42 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6272 writes, 24K keys, 6272 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6272 writes, 1344 syncs, 4.67 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2050 writes, 5141 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
                                           Interval WAL: 2050 writes, 951 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:28:43 compute-0 ceph-mon[75249]: pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:28:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:28:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:46 compute-0 ceph-mon[75249]: pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:48 compute-0 ceph-mon[75249]: pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:48 compute-0 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 17:28:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.947 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:28:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.948 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:28:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.949 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:28:49 compute-0 podman[254082]: 2026-01-10 17:28:49.086663608 +0000 UTC m=+0.073526252 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 17:28:49 compute-0 podman[254083]: 2026-01-10 17:28:49.177720235 +0000 UTC m=+0.166104543 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 17:28:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:50 compute-0 ceph-mon[75249]: pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:52 compute-0 ceph-mon[75249]: pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:54 compute-0 ceph-mon[75249]: pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:28:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:56 compute-0 sshd-session[254127]: Connection closed by authenticating user root 216.36.124.133 port 58988 [preauth]
Jan 10 17:28:56 compute-0 ceph-mon[75249]: pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:56 compute-0 nova_compute[237049]: 2026-01-10 17:28:56.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:56 compute-0 nova_compute[237049]: 2026-01-10 17:28:56.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:28:56 compute-0 nova_compute[237049]: 2026-01-10 17:28:56.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:28:56 compute-0 nova_compute[237049]: 2026-01-10 17:28:56.371 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:28:56 compute-0 nova_compute[237049]: 2026-01-10 17:28:56.372 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:28:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:58 compute-0 ceph-mon[75249]: pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:28:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:00 compute-0 ceph-mon[75249]: pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:29:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:29:01 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205175263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:29:01 compute-0 nova_compute[237049]: 2026-01-10 17:29:01.995 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:29:02 compute-0 ceph-mon[75249]: pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:02 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2205175263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.223 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.226 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.226 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.227 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.311 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.312 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.331 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:29:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:29:02 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670586627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.929 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.938 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.954 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.956 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:29:02 compute-0 nova_compute[237049]: 2026-01-10 17:29:02.956 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:29:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2670586627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:29:03 compute-0 nova_compute[237049]: 2026-01-10 17:29:03.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:03 compute-0 nova_compute[237049]: 2026-01-10 17:29:03.947 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:03 compute-0 nova_compute[237049]: 2026-01-10 17:29:03.948 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:04 compute-0 ceph-mon[75249]: pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:06 compute-0 ceph-mon[75249]: pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:06 compute-0 nova_compute[237049]: 2026-01-10 17:29:06.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:08 compute-0 ceph-mon[75249]: pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:10 compute-0 ceph-mon[75249]: pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:12 compute-0 ceph-mon[75249]: pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:14 compute-0 ceph-mon[75249]: pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:16 compute-0 ceph-mon[75249]: pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:18 compute-0 ceph-mon[75249]: pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:19 compute-0 ceph-mon[75249]: pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:20 compute-0 podman[254173]: 2026-01-10 17:29:20.094506327 +0000 UTC m=+0.081347258 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 10 17:29:20 compute-0 podman[254174]: 2026-01-10 17:29:20.127467428 +0000 UTC m=+0.116192614 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 10 17:29:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:21 compute-0 ceph-mon[75249]: pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:22 compute-0 sudo[254218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:22 compute-0 sudo[254218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:22 compute-0 sudo[254218]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:22 compute-0 sudo[254243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 10 17:29:22 compute-0 sudo[254243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:22 compute-0 sudo[254243]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:29:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:22 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:29:22 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:22 compute-0 sudo[254289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:23 compute-0 sudo[254289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:23 compute-0 sudo[254289]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:23 compute-0 sudo[254314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:29:23 compute-0 sudo[254314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:23 compute-0 sudo[254314]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:23 compute-0 sudo[254370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:23 compute-0 sudo[254370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:23 compute-0 sudo[254370]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:23 compute-0 ceph-mon[75249]: pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:23 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:23 compute-0 sudo[254395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- inventory --format=json-pretty --filter-for-batch
Jan 10 17:29:23 compute-0 sudo[254395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.266996595 +0000 UTC m=+0.056982135 container create 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 17:29:24 compute-0 systemd[1]: Started libpod-conmon-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope.
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.244601089 +0000 UTC m=+0.034586679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.36595608 +0000 UTC m=+0.155941730 container init 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.379404018 +0000 UTC m=+0.169389588 container start 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.383786335 +0000 UTC m=+0.173771875 container attach 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 17:29:24 compute-0 serene_mclaren[254449]: 167 167
Jan 10 17:29:24 compute-0 systemd[1]: libpod-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope: Deactivated successfully.
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.386962497 +0000 UTC m=+0.176948077 container died 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 17:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52b901919fc985461a58d414f590342ab6faab5b7b1c5d5c9751975f6b443df-merged.mount: Deactivated successfully.
Jan 10 17:29:24 compute-0 podman[254433]: 2026-01-10 17:29:24.442509069 +0000 UTC m=+0.232494619 container remove 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:29:24 compute-0 systemd[1]: libpod-conmon-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope: Deactivated successfully.
Jan 10 17:29:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:24 compute-0 podman[254473]: 2026-01-10 17:29:24.712420247 +0000 UTC m=+0.069694872 container create 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:29:24 compute-0 systemd[1]: Started libpod-conmon-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope.
Jan 10 17:29:24 compute-0 podman[254473]: 2026-01-10 17:29:24.68723038 +0000 UTC m=+0.044504985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:24 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:24 compute-0 podman[254473]: 2026-01-10 17:29:24.829691841 +0000 UTC m=+0.186966496 container init 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 17:29:24 compute-0 podman[254473]: 2026-01-10 17:29:24.842377607 +0000 UTC m=+0.199652232 container start 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:29:24 compute-0 podman[254473]: 2026-01-10 17:29:24.847635178 +0000 UTC m=+0.204909853 container attach 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 17:29:25 compute-0 nice_bardeen[254490]: [
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:     {
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "available": false,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "being_replaced": false,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "ceph_device_lvm": false,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "lsm_data": {},
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "lvs": [],
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "path": "/dev/sr0",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "rejected_reasons": [
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "Has a FileSystem",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "Insufficient space (<5GB)"
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         ],
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         "sys_api": {
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "actuators": null,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "device_nodes": [
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:                 "sr0"
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             ],
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "devname": "sr0",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "human_readable_size": "482.00 KB",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "id_bus": "ata",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "model": "QEMU DVD-ROM",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "nr_requests": "2",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "parent": "/dev/sr0",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "partitions": {},
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "path": "/dev/sr0",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "removable": "1",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "rev": "2.5+",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "ro": "0",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "rotational": "1",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "sas_address": "",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "sas_device_handle": "",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "scheduler_mode": "mq-deadline",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "sectors": 0,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "sectorsize": "2048",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "size": 493568.0,
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "support_discard": "2048",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "type": "disk",
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:             "vendor": "QEMU"
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:         }
Jan 10 17:29:25 compute-0 nice_bardeen[254490]:     }
Jan 10 17:29:25 compute-0 nice_bardeen[254490]: ]
Jan 10 17:29:25 compute-0 systemd[1]: libpod-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope: Deactivated successfully.
Jan 10 17:29:25 compute-0 podman[254473]: 2026-01-10 17:29:25.564655027 +0000 UTC m=+0.921929642 container died 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723-merged.mount: Deactivated successfully.
Jan 10 17:29:25 compute-0 podman[254473]: 2026-01-10 17:29:25.615631348 +0000 UTC m=+0.972905933 container remove 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 17:29:25 compute-0 systemd[1]: libpod-conmon-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope: Deactivated successfully.
Jan 10 17:29:25 compute-0 sudo[254395]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:29:25 compute-0 sudo[255227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:25 compute-0 sudo[255227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:25 compute-0 sudo[255227]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:25 compute-0 sudo[255252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:29:25 compute-0 sudo[255252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:25 compute-0 ceph-mon[75249]: pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:29:25 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.273112488 +0000 UTC m=+0.060527537 container create fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 17:29:26 compute-0 systemd[1]: Started libpod-conmon-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope.
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.245169712 +0000 UTC m=+0.032584801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.36953794 +0000 UTC m=+0.156952979 container init fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.378840899 +0000 UTC m=+0.166255938 container start fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:29:26 compute-0 friendly_turing[255306]: 167 167
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.383460762 +0000 UTC m=+0.170875811 container attach fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:29:26 compute-0 systemd[1]: libpod-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope: Deactivated successfully.
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.387037345 +0000 UTC m=+0.174452364 container died fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e99729f21db00ae1fb7a109e51146725de708d0b4dda0e967544c4918e05063-merged.mount: Deactivated successfully.
Jan 10 17:29:26 compute-0 podman[255290]: 2026-01-10 17:29:26.425814754 +0000 UTC m=+0.213229803 container remove fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 10 17:29:26 compute-0 systemd[1]: libpod-conmon-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope: Deactivated successfully.
Jan 10 17:29:26 compute-0 podman[255331]: 2026-01-10 17:29:26.684281432 +0000 UTC m=+0.065523342 container create ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:29:26 compute-0 systemd[1]: Started libpod-conmon-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope.
Jan 10 17:29:26 compute-0 podman[255331]: 2026-01-10 17:29:26.652948608 +0000 UTC m=+0.034190608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:26 compute-0 podman[255331]: 2026-01-10 17:29:26.792338559 +0000 UTC m=+0.173580469 container init ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:29:26 compute-0 podman[255331]: 2026-01-10 17:29:26.817760892 +0000 UTC m=+0.199002832 container start ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:29:26 compute-0 podman[255331]: 2026-01-10 17:29:26.822565551 +0000 UTC m=+0.203807571 container attach ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:29:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:27 compute-0 suspicious_johnson[255347]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:29:27 compute-0 suspicious_johnson[255347]: --> All data devices are unavailable
Jan 10 17:29:27 compute-0 systemd[1]: libpod-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope: Deactivated successfully.
Jan 10 17:29:27 compute-0 podman[255331]: 2026-01-10 17:29:27.425654562 +0000 UTC m=+0.806896472 container died ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 17:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd-merged.mount: Deactivated successfully.
Jan 10 17:29:27 compute-0 podman[255331]: 2026-01-10 17:29:27.477263981 +0000 UTC m=+0.858505891 container remove ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:29:27 compute-0 systemd[1]: libpod-conmon-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope: Deactivated successfully.
Jan 10 17:29:27 compute-0 sudo[255252]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:27 compute-0 sudo[255380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:27 compute-0 sudo[255380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:27 compute-0 sudo[255380]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:27 compute-0 sudo[255405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:29:27 compute-0 sudo[255405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:27 compute-0 ceph-mon[75249]: pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.148821107 +0000 UTC m=+0.076348623 container create 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:29:28 compute-0 systemd[1]: Started libpod-conmon-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope.
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.120223372 +0000 UTC m=+0.047750938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.264160045 +0000 UTC m=+0.191687601 container init 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.277391297 +0000 UTC m=+0.204918813 container start 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.28198786 +0000 UTC m=+0.209515426 container attach 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 17:29:28 compute-0 interesting_gagarin[255460]: 167 167
Jan 10 17:29:28 compute-0 systemd[1]: libpod-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope: Deactivated successfully.
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.290567887 +0000 UTC m=+0.218095373 container died 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 17:29:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf626c4002afd6bb040000fba66e5c2fa3fa115d2dff1f83d72f321df672214-merged.mount: Deactivated successfully.
Jan 10 17:29:28 compute-0 podman[255443]: 2026-01-10 17:29:28.338415638 +0000 UTC m=+0.265943124 container remove 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:29:28 compute-0 systemd[1]: libpod-conmon-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope: Deactivated successfully.
Jan 10 17:29:28 compute-0 podman[255484]: 2026-01-10 17:29:28.589918065 +0000 UTC m=+0.077391544 container create a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 17:29:28 compute-0 systemd[1]: Started libpod-conmon-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope.
Jan 10 17:29:28 compute-0 podman[255484]: 2026-01-10 17:29:28.553976538 +0000 UTC m=+0.041450077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:28 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:28 compute-0 podman[255484]: 2026-01-10 17:29:28.705360755 +0000 UTC m=+0.192834294 container init a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:29:28 compute-0 podman[255484]: 2026-01-10 17:29:28.723819148 +0000 UTC m=+0.211292637 container start a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 17:29:28 compute-0 podman[255484]: 2026-01-10 17:29:28.729881463 +0000 UTC m=+0.217354992 container attach a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:29:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]: {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     "0": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "devices": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "/dev/loop3"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             ],
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_name": "ceph_lv0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_size": "21470642176",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "name": "ceph_lv0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "tags": {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_name": "ceph",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.crush_device_class": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.encrypted": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.objectstore": "bluestore",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_id": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.vdo": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.with_tpm": "0"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             },
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "vg_name": "ceph_vg0"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         }
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     ],
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     "1": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "devices": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "/dev/loop4"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             ],
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_name": "ceph_lv1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_size": "21470642176",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "name": "ceph_lv1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "tags": {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_name": "ceph",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.crush_device_class": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.encrypted": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.objectstore": "bluestore",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_id": "1",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.vdo": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.with_tpm": "0"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             },
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "vg_name": "ceph_vg1"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         }
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     ],
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     "2": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "devices": [
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "/dev/loop5"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             ],
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_name": "ceph_lv2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_size": "21470642176",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "name": "ceph_lv2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "tags": {
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.cluster_name": "ceph",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.crush_device_class": "",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.encrypted": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.objectstore": "bluestore",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osd_id": "2",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.vdo": "0",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:                 "ceph.with_tpm": "0"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             },
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "type": "block",
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:             "vg_name": "ceph_vg2"
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:         }
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]:     ]
Jan 10 17:29:29 compute-0 vigilant_hoover[255501]: }
Jan 10 17:29:29 compute-0 systemd[1]: libpod-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope: Deactivated successfully.
Jan 10 17:29:29 compute-0 podman[255510]: 2026-01-10 17:29:29.186098506 +0000 UTC m=+0.048055037 container died a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:29:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc-merged.mount: Deactivated successfully.
Jan 10 17:29:29 compute-0 podman[255510]: 2026-01-10 17:29:29.242046391 +0000 UTC m=+0.104002892 container remove a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:29:29 compute-0 systemd[1]: libpod-conmon-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope: Deactivated successfully.
Jan 10 17:29:29 compute-0 sudo[255405]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:29 compute-0 sudo[255525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:29:29 compute-0 sudo[255525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:29 compute-0 sudo[255525]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:29 compute-0 sudo[255550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:29:29 compute-0 sudo[255550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.877963209 +0000 UTC m=+0.058540840 container create 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 17:29:29 compute-0 systemd[1]: Started libpod-conmon-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope.
Jan 10 17:29:29 compute-0 ceph-mon[75249]: pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.85618076 +0000 UTC m=+0.036758371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:29 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.970211101 +0000 UTC m=+0.150788792 container init 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.976809261 +0000 UTC m=+0.157386882 container start 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.981138186 +0000 UTC m=+0.161715817 container attach 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:29:29 compute-0 nostalgic_darwin[255604]: 167 167
Jan 10 17:29:29 compute-0 systemd[1]: libpod-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope: Deactivated successfully.
Jan 10 17:29:29 compute-0 podman[255587]: 2026-01-10 17:29:29.98508578 +0000 UTC m=+0.165663411 container died 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d9c62277471f51618b4bad7b43ec77f7318f050c312a3e5f345622a6f4d102-merged.mount: Deactivated successfully.
Jan 10 17:29:30 compute-0 podman[255587]: 2026-01-10 17:29:30.034175646 +0000 UTC m=+0.214753237 container remove 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 17:29:30 compute-0 systemd[1]: libpod-conmon-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope: Deactivated successfully.
Jan 10 17:29:30 compute-0 podman[255628]: 2026-01-10 17:29:30.273480051 +0000 UTC m=+0.079437713 container create 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:29:30 compute-0 podman[255628]: 2026-01-10 17:29:30.239331466 +0000 UTC m=+0.045289208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:29:30 compute-0 systemd[1]: Started libpod-conmon-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope.
Jan 10 17:29:30 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:29:30 compute-0 podman[255628]: 2026-01-10 17:29:30.387219102 +0000 UTC m=+0.193176794 container init 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:29:30 compute-0 podman[255628]: 2026-01-10 17:29:30.396217331 +0000 UTC m=+0.202174993 container start 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:29:30 compute-0 podman[255628]: 2026-01-10 17:29:30.401404141 +0000 UTC m=+0.207361853 container attach 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:29:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:31 compute-0 lvm[255721]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:29:31 compute-0 lvm[255721]: VG ceph_vg0 finished
Jan 10 17:29:31 compute-0 lvm[255725]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:29:31 compute-0 lvm[255725]: VG ceph_vg2 finished
Jan 10 17:29:31 compute-0 lvm[255724]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:29:31 compute-0 lvm[255724]: VG ceph_vg1 finished
Jan 10 17:29:31 compute-0 elated_elion[255644]: {}
Jan 10 17:29:31 compute-0 systemd[1]: libpod-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Deactivated successfully.
Jan 10 17:29:31 compute-0 systemd[1]: libpod-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Consumed 1.695s CPU time.
Jan 10 17:29:31 compute-0 podman[255729]: 2026-01-10 17:29:31.459950353 +0000 UTC m=+0.043392553 container died 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc-merged.mount: Deactivated successfully.
Jan 10 17:29:31 compute-0 podman[255729]: 2026-01-10 17:29:31.5173606 +0000 UTC m=+0.100802780 container remove 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:29:31 compute-0 systemd[1]: libpod-conmon-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Deactivated successfully.
Jan 10 17:29:31 compute-0 sudo[255550]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:29:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:31 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:29:31 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:31 compute-0 sudo[255743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:29:31 compute-0 sudo[255743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:29:31 compute-0 sudo[255743]: pam_unix(sudo:session): session closed for user root
Jan 10 17:29:31 compute-0 ceph-mon[75249]: pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:31 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:29:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:33 compute-0 ceph-mon[75249]: pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:35 compute-0 ceph-mon[75249]: pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:29:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:29:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:29:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:29:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:29:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:29:37 compute-0 ceph-mon[75249]: pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:29:38
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:29:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:29:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:29:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:39 compute-0 ceph-mon[75249]: pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:41 compute-0 ceph-mon[75249]: pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:44 compute-0 ceph-mon[75249]: pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:29:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:29:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:46 compute-0 ceph-mon[75249]: pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:48 compute-0 ceph-mon[75249]: pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.948 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:29:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.950 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:29:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.951 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:29:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:50 compute-0 ceph-mon[75249]: pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:51 compute-0 podman[255768]: 2026-01-10 17:29:51.099843866 +0000 UTC m=+0.081341351 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 10 17:29:51 compute-0 podman[255769]: 2026-01-10 17:29:51.156016611 +0000 UTC m=+0.138214625 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:29:52 compute-0 ceph-mon[75249]: pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:54 compute-0 ceph-mon[75249]: pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:29:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:56 compute-0 ceph-mon[75249]: pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:57 compute-0 nova_compute[237049]: 2026-01-10 17:29:57.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:57 compute-0 nova_compute[237049]: 2026-01-10 17:29:57.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:29:57 compute-0 nova_compute[237049]: 2026-01-10 17:29:57.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:29:57 compute-0 nova_compute[237049]: 2026-01-10 17:29:57.367 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:29:58 compute-0 ceph-mon[75249]: pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:58 compute-0 nova_compute[237049]: 2026-01-10 17:29:58.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:29:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:29:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:00 compute-0 ceph-mon[75249]: pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:01 compute-0 nova_compute[237049]: 2026-01-10 17:30:01.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:02 compute-0 ceph-mon[75249]: pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:30:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:30:02 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491358180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:30:02 compute-0 nova_compute[237049]: 2026-01-10 17:30:02.950 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:30:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3491358180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.159 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.160 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.160 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.229 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.230 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.247 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:30:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:30:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3342619412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.778 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.785 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.805 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.808 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:30:03 compute-0 nova_compute[237049]: 2026-01-10 17:30:03.809 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:30:04 compute-0 ceph-mon[75249]: pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3342619412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:30:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:04 compute-0 nova_compute[237049]: 2026-01-10 17:30:04.800 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:04 compute-0 nova_compute[237049]: 2026-01-10 17:30:04.801 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:04 compute-0 nova_compute[237049]: 2026-01-10 17:30:04.802 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:04 compute-0 nova_compute[237049]: 2026-01-10 17:30:04.802 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:04 compute-0 nova_compute[237049]: 2026-01-10 17:30:04.803 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:30:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:06 compute-0 ceph-mon[75249]: pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:07 compute-0 nova_compute[237049]: 2026-01-10 17:30:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:08 compute-0 ceph-mon[75249]: pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:10 compute-0 ceph-mon[75249]: pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:12 compute-0 ceph-mon[75249]: pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:14 compute-0 ceph-mon[75249]: pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:16 compute-0 ceph-mon[75249]: pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:18 compute-0 ceph-mon[75249]: pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:20 compute-0 ceph-mon[75249]: pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:22 compute-0 podman[255856]: 2026-01-10 17:30:22.063840593 +0000 UTC m=+0.058306675 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 10 17:30:22 compute-0 podman[255857]: 2026-01-10 17:30:22.12759902 +0000 UTC m=+0.112285018 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 10 17:30:22 compute-0 ceph-mon[75249]: pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:24 compute-0 ceph-mon[75249]: pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:26 compute-0 ceph-mon[75249]: pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:28 compute-0 ceph-mon[75249]: pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:30 compute-0 ceph-mon[75249]: pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:30 compute-0 sshd-session[255901]: Connection closed by authenticating user root 216.36.124.133 port 59992 [preauth]
Jan 10 17:30:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:31 compute-0 sudo[255903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:30:31 compute-0 sudo[255903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:31 compute-0 sudo[255903]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:31 compute-0 sudo[255928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:30:31 compute-0 sudo[255928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:32 compute-0 sudo[255928]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:30:32 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:30:32 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:30:32 compute-0 sudo[255984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:30:32 compute-0 sudo[255984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:32 compute-0 sudo[255984]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:32 compute-0 sudo[256009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:30:32 compute-0 sudo[256009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:30:33 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.279665114 +0000 UTC m=+0.076861695 container create 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:30:33 compute-0 systemd[1]: Started libpod-conmon-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope.
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.249843918 +0000 UTC m=+0.047040579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:33 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.399489423 +0000 UTC m=+0.196686064 container init 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.407417685 +0000 UTC m=+0.204614236 container start 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.411788898 +0000 UTC m=+0.208985449 container attach 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:33 compute-0 nifty_euler[256062]: 167 167
Jan 10 17:30:33 compute-0 systemd[1]: libpod-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope: Deactivated successfully.
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.416111649 +0000 UTC m=+0.213308220 container died 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8dab84eb6676a3e5fba7a8b0138d188f578ea8594071399297e2477325b7cfa-merged.mount: Deactivated successfully.
Jan 10 17:30:33 compute-0 podman[256046]: 2026-01-10 17:30:33.473275071 +0000 UTC m=+0.270471662 container remove 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:33 compute-0 systemd[1]: libpod-conmon-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope: Deactivated successfully.
Jan 10 17:30:33 compute-0 podman[256086]: 2026-01-10 17:30:33.634304715 +0000 UTC m=+0.041355341 container create 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 17:30:33 compute-0 systemd[1]: Started libpod-conmon-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope.
Jan 10 17:30:33 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:33 compute-0 podman[256086]: 2026-01-10 17:30:33.616809994 +0000 UTC m=+0.023860630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:33 compute-0 podman[256086]: 2026-01-10 17:30:33.724618506 +0000 UTC m=+0.131669202 container init 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 17:30:33 compute-0 podman[256086]: 2026-01-10 17:30:33.733562837 +0000 UTC m=+0.140613503 container start 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:30:33 compute-0 podman[256086]: 2026-01-10 17:30:33.737626351 +0000 UTC m=+0.144677017 container attach 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 17:30:34 compute-0 ceph-mon[75249]: pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:34 compute-0 happy_wescoff[256103]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:30:34 compute-0 happy_wescoff[256103]: --> All data devices are unavailable
Jan 10 17:30:34 compute-0 systemd[1]: libpod-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope: Deactivated successfully.
Jan 10 17:30:34 compute-0 podman[256086]: 2026-01-10 17:30:34.346360943 +0000 UTC m=+0.753411599 container died 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89-merged.mount: Deactivated successfully.
Jan 10 17:30:34 compute-0 podman[256086]: 2026-01-10 17:30:34.411661183 +0000 UTC m=+0.818711829 container remove 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 17:30:34 compute-0 systemd[1]: libpod-conmon-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope: Deactivated successfully.
Jan 10 17:30:34 compute-0 sudo[256009]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:34 compute-0 sudo[256136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:30:34 compute-0 sudo[256136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:34 compute-0 sudo[256136]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:34 compute-0 sudo[256161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:30:34 compute-0 sudo[256161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.033188354 +0000 UTC m=+0.054953122 container create 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:35 compute-0 systemd[1]: Started libpod-conmon-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope.
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.009601052 +0000 UTC m=+0.031365850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.126421137 +0000 UTC m=+0.148185935 container init 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.132765425 +0000 UTC m=+0.154530183 container start 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.135959254 +0000 UTC m=+0.157724012 container attach 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 17:30:35 compute-0 fervent_mendel[256213]: 167 167
Jan 10 17:30:35 compute-0 systemd[1]: libpod-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope: Deactivated successfully.
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.1386738 +0000 UTC m=+0.160438558 container died 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 17:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b438eb605f37ecfaa215689dbdcb33f26460b0757be62e417fded8982c29041-merged.mount: Deactivated successfully.
Jan 10 17:30:35 compute-0 podman[256197]: 2026-01-10 17:30:35.209870306 +0000 UTC m=+0.231635064 container remove 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 17:30:35 compute-0 systemd[1]: libpod-conmon-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope: Deactivated successfully.
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.40838316 +0000 UTC m=+0.061961348 container create efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 10 17:30:35 compute-0 systemd[1]: Started libpod-conmon-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope.
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.382208807 +0000 UTC m=+0.035787025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:35 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.522444847 +0000 UTC m=+0.176023065 container init efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.536303106 +0000 UTC m=+0.189881324 container start efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.541585094 +0000 UTC m=+0.195163302 container attach efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]: {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     "0": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "devices": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "/dev/loop3"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             ],
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_name": "ceph_lv0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_size": "21470642176",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "name": "ceph_lv0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "tags": {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_name": "ceph",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.crush_device_class": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.encrypted": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.objectstore": "bluestore",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_id": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.vdo": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.with_tpm": "0"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             },
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "vg_name": "ceph_vg0"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         }
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     ],
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     "1": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "devices": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "/dev/loop4"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             ],
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_name": "ceph_lv1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_size": "21470642176",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "name": "ceph_lv1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "tags": {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_name": "ceph",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.crush_device_class": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.encrypted": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.objectstore": "bluestore",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_id": "1",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.vdo": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.with_tpm": "0"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             },
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "vg_name": "ceph_vg1"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         }
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     ],
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     "2": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "devices": [
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "/dev/loop5"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             ],
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_name": "ceph_lv2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_size": "21470642176",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "name": "ceph_lv2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "tags": {
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.cluster_name": "ceph",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.crush_device_class": "",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.encrypted": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.objectstore": "bluestore",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osd_id": "2",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.vdo": "0",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:                 "ceph.with_tpm": "0"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             },
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "type": "block",
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:             "vg_name": "ceph_vg2"
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:         }
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]:     ]
Jan 10 17:30:35 compute-0 affectionate_chatelet[256256]: }
Jan 10 17:30:35 compute-0 systemd[1]: libpod-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope: Deactivated successfully.
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.889619129 +0000 UTC m=+0.543197357 container died efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 17:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa-merged.mount: Deactivated successfully.
Jan 10 17:30:35 compute-0 podman[256239]: 2026-01-10 17:30:35.932014027 +0000 UTC m=+0.585592235 container remove efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:35 compute-0 systemd[1]: libpod-conmon-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope: Deactivated successfully.
Jan 10 17:30:35 compute-0 sudo[256161]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:36 compute-0 sudo[256277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:30:36 compute-0 sudo[256277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:36 compute-0 sudo[256277]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:30:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:30:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:30:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:30:36 compute-0 sudo[256302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:30:36 compute-0 sudo[256302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:36 compute-0 ceph-mon[75249]: pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:30:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.449950515 +0000 UTC m=+0.060979840 container create 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 17:30:36 compute-0 systemd[1]: Started libpod-conmon-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope.
Jan 10 17:30:36 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.42516187 +0000 UTC m=+0.036191255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.53434004 +0000 UTC m=+0.145369355 container init 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.540084921 +0000 UTC m=+0.151114216 container start 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.5436187 +0000 UTC m=+0.154647995 container attach 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:30:36 compute-0 zen_babbage[256355]: 167 167
Jan 10 17:30:36 compute-0 systemd[1]: libpod-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope: Deactivated successfully.
Jan 10 17:30:36 compute-0 conmon[256355]: conmon 1a3d2aa9cfa73c38328d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope/container/memory.events
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.548667002 +0000 UTC m=+0.159696317 container died 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85a9df13187350def4e010090d1b0859318f6f77b92cf4eddf3967508213551-merged.mount: Deactivated successfully.
Jan 10 17:30:36 compute-0 podman[256339]: 2026-01-10 17:30:36.590993148 +0000 UTC m=+0.202022443 container remove 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:30:36 compute-0 systemd[1]: libpod-conmon-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope: Deactivated successfully.
Jan 10 17:30:36 compute-0 podman[256378]: 2026-01-10 17:30:36.796855788 +0000 UTC m=+0.056522095 container create 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:30:36 compute-0 podman[256378]: 2026-01-10 17:30:36.772627829 +0000 UTC m=+0.032294146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:30:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:36 compute-0 systemd[1]: Started libpod-conmon-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope.
Jan 10 17:30:36 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:30:36 compute-0 podman[256378]: 2026-01-10 17:30:36.936618476 +0000 UTC m=+0.196284843 container init 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:30:36 compute-0 podman[256378]: 2026-01-10 17:30:36.946091981 +0000 UTC m=+0.205758268 container start 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 17:30:36 compute-0 podman[256378]: 2026-01-10 17:30:36.950275179 +0000 UTC m=+0.209941546 container attach 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:30:37 compute-0 lvm[256470]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:30:37 compute-0 lvm[256470]: VG ceph_vg0 finished
Jan 10 17:30:37 compute-0 lvm[256473]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:30:37 compute-0 lvm[256473]: VG ceph_vg1 finished
Jan 10 17:30:37 compute-0 lvm[256475]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:30:37 compute-0 lvm[256475]: VG ceph_vg2 finished
Jan 10 17:30:37 compute-0 affectionate_torvalds[256394]: {}
Jan 10 17:30:37 compute-0 systemd[1]: libpod-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Deactivated successfully.
Jan 10 17:30:37 compute-0 podman[256378]: 2026-01-10 17:30:37.885540594 +0000 UTC m=+1.145206951 container died 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 17:30:37 compute-0 systemd[1]: libpod-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Consumed 1.483s CPU time.
Jan 10 17:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1-merged.mount: Deactivated successfully.
Jan 10 17:30:37 compute-0 podman[256378]: 2026-01-10 17:30:37.935253437 +0000 UTC m=+1.194919694 container remove 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:30:37 compute-0 systemd[1]: libpod-conmon-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Deactivated successfully.
Jan 10 17:30:37 compute-0 sudo[256302]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:30:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:38 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:30:38 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:38 compute-0 sudo[256493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:30:38 compute-0 sudo[256493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:30:38 compute-0 sudo[256493]: pam_unix(sudo:session): session closed for user root
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:30:38
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images']
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:30:38 compute-0 ceph-mon[75249]: pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:38 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:30:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:30:39 compute-0 ceph-mon[75249]: pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:30:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:30:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:41 compute-0 ceph-mon[75249]: pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:43 compute-0 ceph-mon[75249]: pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:30:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:30:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:45 compute-0 ceph-mon[75249]: pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:47 compute-0 ceph-mon[75249]: pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.949 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:30:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.951 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:30:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.952 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:30:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:49 compute-0 ceph-mon[75249]: pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:51 compute-0 ceph-mon[75249]: pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:53 compute-0 podman[256518]: 2026-01-10 17:30:53.119009797 +0000 UTC m=+0.098152149 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 10 17:30:53 compute-0 podman[256519]: 2026-01-10 17:30:53.186113905 +0000 UTC m=+0.165015100 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 17:30:53 compute-0 ceph-mon[75249]: pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:30:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:56 compute-0 ceph-mon[75249]: pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:58 compute-0 ceph-mon[75249]: pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:30:59 compute-0 nova_compute[237049]: 2026-01-10 17:30:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:59 compute-0 nova_compute[237049]: 2026-01-10 17:30:59.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:30:59 compute-0 nova_compute[237049]: 2026-01-10 17:30:59.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:30:59 compute-0 nova_compute[237049]: 2026-01-10 17:30:59.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:30:59 compute-0 nova_compute[237049]: 2026-01-10 17:30:59.372 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:30:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:00 compute-0 ceph-mon[75249]: pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:00 compute-0 nova_compute[237049]: 2026-01-10 17:31:00.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:02 compute-0 ceph-mon[75249]: pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.370 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.393 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:31:02 compute-0 nova_compute[237049]: 2026-01-10 17:31:02.393 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:31:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:31:02 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476922189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.010 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:31:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2476922189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.281 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.282 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.283 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.283 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.463 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.464 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.552 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing inventories for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.634 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating ProviderTree inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.635 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.657 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing aggregate associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.682 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing trait associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 10 17:31:03 compute-0 nova_compute[237049]: 2026-01-10 17:31:03.699 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:31:04 compute-0 ceph-mon[75249]: pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:31:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260666177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.305 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.312 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.327 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.329 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.329 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.330 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:04 compute-0 nova_compute[237049]: 2026-01-10 17:31:04.330 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 10 17:31:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1260666177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.317 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.317 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:31:05 compute-0 nova_compute[237049]: 2026-01-10 17:31:05.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:06 compute-0 ceph-mon[75249]: pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.082543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266082621, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 2383581, "memory_usage": 2429832, "flush_reason": "Manual Compaction"}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266100876, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2300057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20680, "largest_seqno": 22727, "table_properties": {"data_size": 2290858, "index_size": 5757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18448, "raw_average_key_size": 19, "raw_value_size": 2272434, "raw_average_value_size": 2454, "num_data_blocks": 264, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768066041, "oldest_key_time": 1768066041, "file_creation_time": 1768066266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18413 microseconds, and 9694 cpu microseconds.
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.100968) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2300057 bytes OK
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.100999) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102873) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102903) EVENT_LOG_v1 {"time_micros": 1768066266102894, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2375018, prev total WAL file size 2375018, number of live WAL files 2.
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.104247) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2246KB)], [50(5788KB)]
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266104375, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8227567, "oldest_snapshot_seqno": -1}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4491 keys, 7012119 bytes, temperature: kUnknown
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266151009, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7012119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6978678, "index_size": 21107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 107499, "raw_average_key_size": 23, "raw_value_size": 6894788, "raw_average_value_size": 1535, "num_data_blocks": 896, "num_entries": 4491, "num_filter_entries": 4491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768066266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.151490) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7012119 bytes
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.153472) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.9 rd, 149.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5005, records dropped: 514 output_compression: NoCompression
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.153520) EVENT_LOG_v1 {"time_micros": 1768066266153493, "job": 26, "event": "compaction_finished", "compaction_time_micros": 46773, "compaction_time_cpu_micros": 23987, "output_level": 6, "num_output_files": 1, "total_output_size": 7012119, "num_input_records": 5005, "num_output_records": 4491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266154555, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266156663, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.104116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 17:31:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:07 compute-0 nova_compute[237049]: 2026-01-10 17:31:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:08 compute-0 ceph-mon[75249]: pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:10 compute-0 ceph-mon[75249]: pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:12 compute-0 ceph-mon[75249]: pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:14 compute-0 ceph-mon[75249]: pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:14 compute-0 nova_compute[237049]: 2026-01-10 17:31:14.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:31:14 compute-0 nova_compute[237049]: 2026-01-10 17:31:14.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 10 17:31:14 compute-0 nova_compute[237049]: 2026-01-10 17:31:14.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 10 17:31:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:16 compute-0 ceph-mon[75249]: pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:18 compute-0 ceph-mon[75249]: pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:20 compute-0 ceph-mon[75249]: pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:20 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:22 compute-0 ceph-mon[75249]: pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:22 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:24 compute-0 podman[256606]: 2026-01-10 17:31:24.07689955 +0000 UTC m=+0.068058456 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 17:31:24 compute-0 podman[256607]: 2026-01-10 17:31:24.11441918 +0000 UTC m=+0.107187551 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 10 17:31:24 compute-0 ceph-mon[75249]: pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:24 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:24 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:26 compute-0 ceph-mon[75249]: pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:26 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:28 compute-0 ceph-mon[75249]: pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:28 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:29 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:30 compute-0 ceph-mon[75249]: pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:30 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:32 compute-0 ceph-mon[75249]: pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:32 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:34 compute-0 ceph-mon[75249]: pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:34 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:34 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 17:31:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:31:36 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 17:31:36 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:31:36 compute-0 ceph-mon[75249]: pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 17:31:36 compute-0 ceph-mon[75249]: from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 17:31:36 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:31:38
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'images', '.mgr', 'vms', 'cephfs.cephfs.data']
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 17:31:38 compute-0 sudo[256653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:31:38 compute-0 sudo[256653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:38 compute-0 sudo[256653]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:38 compute-0 ceph-mon[75249]: pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:38 compute-0 sudo[256678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 10 17:31:38 compute-0 sudo[256678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:38 compute-0 podman[256748]: 2026-01-10 17:31:38.813582555 +0000 UTC m=+0.104531737 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 17:31:38 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:38 compute-0 podman[256748]: 2026-01-10 17:31:38.927812703 +0000 UTC m=+0.218762045 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:31:39 compute-0 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 17:31:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:39 compute-0 sudo[256678]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:31:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:39 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:31:39 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:39 compute-0 sudo[256914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:31:39 compute-0 sudo[256914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:39 compute-0 sudo[256914]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:39 compute-0 sudo[256939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 10 17:31:39 compute-0 sudo[256939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:40 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:40 compute-0 sudo[256939]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:31:40 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:31:40 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:40 compute-0 sudo[256995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:31:40 compute-0 sudo[256995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:40 compute-0 sudo[256995]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:40 compute-0 sudo[257020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 10 17:31:40 compute-0 sudo[257020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:40 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.108989734 +0000 UTC m=+0.056655416 container create b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:31:41 compute-0 systemd[1]: Started libpod-conmon-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope.
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.084064027 +0000 UTC m=+0.031729819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.217573814 +0000 UTC m=+0.165239606 container init b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.228768367 +0000 UTC m=+0.176434099 container start b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.232528382 +0000 UTC m=+0.180194114 container attach b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:31:41 compute-0 hardcore_stonebraker[257073]: 167 167
Jan 10 17:31:41 compute-0 systemd[1]: libpod-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope: Deactivated successfully.
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.236997658 +0000 UTC m=+0.184663410 container died b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 17:31:41 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-185260400ca50c3e4961ce223636abf17b336c004f27b6ef38024f79890bdbf5-merged.mount: Deactivated successfully.
Jan 10 17:31:41 compute-0 podman[257057]: 2026-01-10 17:31:41.289940819 +0000 UTC m=+0.237606541 container remove b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 17:31:41 compute-0 systemd[1]: libpod-conmon-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope: Deactivated successfully.
Jan 10 17:31:41 compute-0 podman[257097]: 2026-01-10 17:31:41.506846191 +0000 UTC m=+0.059066194 container create 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:31:41 compute-0 systemd[1]: Started libpod-conmon-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope.
Jan 10 17:31:41 compute-0 podman[257097]: 2026-01-10 17:31:41.474522976 +0000 UTC m=+0.026742969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:41 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:41 compute-0 podman[257097]: 2026-01-10 17:31:41.617892459 +0000 UTC m=+0.170112482 container init 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 17:31:41 compute-0 podman[257097]: 2026-01-10 17:31:41.627717594 +0000 UTC m=+0.179937577 container start 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 17:31:41 compute-0 podman[257097]: 2026-01-10 17:31:41.631995404 +0000 UTC m=+0.184215417 container attach 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 17:31:42 compute-0 affectionate_chaplygin[257113]: --> passed data devices: 0 physical, 3 LVM
Jan 10 17:31:42 compute-0 affectionate_chaplygin[257113]: --> All data devices are unavailable
Jan 10 17:31:42 compute-0 systemd[1]: libpod-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope: Deactivated successfully.
Jan 10 17:31:42 compute-0 podman[257097]: 2026-01-10 17:31:42.198677025 +0000 UTC m=+0.750897048 container died 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 17:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e-merged.mount: Deactivated successfully.
Jan 10 17:31:42 compute-0 ceph-mon[75249]: pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:42 compute-0 podman[257097]: 2026-01-10 17:31:42.281133483 +0000 UTC m=+0.833353466 container remove 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:31:42 compute-0 systemd[1]: libpod-conmon-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope: Deactivated successfully.
Jan 10 17:31:42 compute-0 sudo[257020]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:42 compute-0 sudo[257144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:31:42 compute-0 sudo[257144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:42 compute-0 sudo[257144]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:42 compute-0 sudo[257169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- lvm list --format json
Jan 10 17:31:42 compute-0 sudo[257169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.844373868 +0000 UTC m=+0.066238475 container create 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 17:31:42 compute-0 systemd[1]: Started libpod-conmon-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope.
Jan 10 17:31:42 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.817063323 +0000 UTC m=+0.038928020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:42 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.944451549 +0000 UTC m=+0.166316146 container init 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.956106075 +0000 UTC m=+0.177970712 container start 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.961296251 +0000 UTC m=+0.183160878 container attach 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 17:31:42 compute-0 modest_chaum[257223]: 167 167
Jan 10 17:31:42 compute-0 systemd[1]: libpod-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope: Deactivated successfully.
Jan 10 17:31:42 compute-0 podman[257207]: 2026-01-10 17:31:42.964276584 +0000 UTC m=+0.186141191 container died 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5392128390e656b8e6d89b5265eb911182d92622bae2ec34bc989cdfd07f599d-merged.mount: Deactivated successfully.
Jan 10 17:31:43 compute-0 podman[257207]: 2026-01-10 17:31:43.008140892 +0000 UTC m=+0.230005489 container remove 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 17:31:43 compute-0 systemd[1]: libpod-conmon-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope: Deactivated successfully.
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.232838611 +0000 UTC m=+0.063870709 container create 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 17:31:43 compute-0 systemd[1]: Started libpod-conmon-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope.
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.202619085 +0000 UTC m=+0.033651173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:43 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.334843296 +0000 UTC m=+0.165875434 container init 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.345165535 +0000 UTC m=+0.176197623 container start 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.349233419 +0000 UTC m=+0.180265507 container attach 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 17:31:43 compute-0 sshd-session[257270]: Accepted publickey for zuul from 192.168.122.10 port 45330 ssh2: ECDSA SHA256:YYROLJW/JwZAyyZtyl+88gzuUs1GqrQIhGb+AzXg9yc
Jan 10 17:31:43 compute-0 systemd-logind[798]: New session 55 of user zuul.
Jan 10 17:31:43 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 10 17:31:43 compute-0 sshd-session[257270]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 10 17:31:43 compute-0 funny_gates[257265]: {
Jan 10 17:31:43 compute-0 funny_gates[257265]:     "0": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:         {
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "devices": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "/dev/loop3"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             ],
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_name": "ceph_lv0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_size": "21470642176",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "name": "ceph_lv0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "tags": {
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_name": "ceph",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.crush_device_class": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.encrypted": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.objectstore": "bluestore",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_id": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.vdo": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.with_tpm": "0"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             },
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "vg_name": "ceph_vg0"
Jan 10 17:31:43 compute-0 funny_gates[257265]:         }
Jan 10 17:31:43 compute-0 funny_gates[257265]:     ],
Jan 10 17:31:43 compute-0 funny_gates[257265]:     "1": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:         {
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "devices": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "/dev/loop4"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             ],
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_name": "ceph_lv1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_size": "21470642176",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "name": "ceph_lv1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "tags": {
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_name": "ceph",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.crush_device_class": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.encrypted": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.objectstore": "bluestore",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_id": "1",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.vdo": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.with_tpm": "0"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             },
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "vg_name": "ceph_vg1"
Jan 10 17:31:43 compute-0 funny_gates[257265]:         }
Jan 10 17:31:43 compute-0 funny_gates[257265]:     ],
Jan 10 17:31:43 compute-0 funny_gates[257265]:     "2": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:         {
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "devices": [
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "/dev/loop5"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             ],
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_name": "ceph_lv2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_size": "21470642176",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "name": "ceph_lv2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "tags": {
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cephx_lockbox_secret": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.cluster_name": "ceph",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.crush_device_class": "",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.encrypted": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.objectstore": "bluestore",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osd_id": "2",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.vdo": "0",
Jan 10 17:31:43 compute-0 funny_gates[257265]:                 "ceph.with_tpm": "0"
Jan 10 17:31:43 compute-0 funny_gates[257265]:             },
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "type": "block",
Jan 10 17:31:43 compute-0 funny_gates[257265]:             "vg_name": "ceph_vg2"
Jan 10 17:31:43 compute-0 funny_gates[257265]:         }
Jan 10 17:31:43 compute-0 funny_gates[257265]:     ]
Jan 10 17:31:43 compute-0 funny_gates[257265]: }
Jan 10 17:31:43 compute-0 sudo[257278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 10 17:31:43 compute-0 sudo[257278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 10 17:31:43 compute-0 systemd[1]: libpod-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope: Deactivated successfully.
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.740139281 +0000 UTC m=+0.571171339 container died 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 17:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65-merged.mount: Deactivated successfully.
Jan 10 17:31:43 compute-0 podman[257248]: 2026-01-10 17:31:43.788270458 +0000 UTC m=+0.619302556 container remove 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 17:31:43 compute-0 systemd[1]: libpod-conmon-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope: Deactivated successfully.
Jan 10 17:31:43 compute-0 sudo[257169]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:43 compute-0 sudo[257318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 10 17:31:43 compute-0 sudo[257318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:43 compute-0 sudo[257318]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:44 compute-0 ceph-mon[75249]: pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 17:31:44 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:44 compute-0 sudo[257347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -- raw list --format json
Jan 10 17:31:44 compute-0 sudo[257347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:44 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.076469616 +0000 UTC m=+0.050334100 container create 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:31:45 compute-0 systemd[1]: Started libpod-conmon-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope.
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.05267295 +0000 UTC m=+0.026537454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.176166496 +0000 UTC m=+0.150030990 container init 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.188528782 +0000 UTC m=+0.162393246 container start 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.192801702 +0000 UTC m=+0.166666166 container attach 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 17:31:45 compute-0 focused_rhodes[257450]: 167 167
Jan 10 17:31:45 compute-0 systemd[1]: libpod-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope: Deactivated successfully.
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.197583436 +0000 UTC m=+0.171447900 container died 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 17:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3eaadf188c7484c0122bf159fe6b0f478d7df46569ce8473d4de69ca26a2d4-merged.mount: Deactivated successfully.
Jan 10 17:31:45 compute-0 podman[257434]: 2026-01-10 17:31:45.237955366 +0000 UTC m=+0.211819830 container remove 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 17:31:45 compute-0 systemd[1]: libpod-conmon-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope: Deactivated successfully.
Jan 10 17:31:45 compute-0 podman[257492]: 2026-01-10 17:31:45.419048345 +0000 UTC m=+0.050521596 container create e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 17:31:45 compute-0 systemd[1]: Started libpod-conmon-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope.
Jan 10 17:31:45 compute-0 systemd[1]: Started libcrun container.
Jan 10 17:31:45 compute-0 podman[257492]: 2026-01-10 17:31:45.393902421 +0000 UTC m=+0.025375722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 17:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 17:31:45 compute-0 podman[257492]: 2026-01-10 17:31:45.50534928 +0000 UTC m=+0.136822551 container init e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 17:31:45 compute-0 podman[257492]: 2026-01-10 17:31:45.511944265 +0000 UTC m=+0.143417506 container start e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 17:31:45 compute-0 podman[257492]: 2026-01-10 17:31:45.527533361 +0000 UTC m=+0.159006632 container attach e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 17:31:46 compute-0 lvm[257660]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:31:46 compute-0 lvm[257660]: VG ceph_vg0 finished
Jan 10 17:31:46 compute-0 lvm[257663]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:31:46 compute-0 lvm[257663]: VG ceph_vg2 finished
Jan 10 17:31:46 compute-0 lvm[257661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:31:46 compute-0 lvm[257661]: VG ceph_vg1 finished
Jan 10 17:31:46 compute-0 ceph-mon[75249]: pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:46 compute-0 sharp_carson[257512]: {}
Jan 10 17:31:46 compute-0 systemd[1]: libpod-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Deactivated successfully.
Jan 10 17:31:46 compute-0 systemd[1]: libpod-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Consumed 1.416s CPU time.
Jan 10 17:31:46 compute-0 podman[257492]: 2026-01-10 17:31:46.339153308 +0000 UTC m=+0.970626549 container died e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 17:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba-merged.mount: Deactivated successfully.
Jan 10 17:31:46 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:46 compute-0 podman[257492]: 2026-01-10 17:31:46.391452302 +0000 UTC m=+1.022925553 container remove e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 17:31:46 compute-0 systemd[1]: libpod-conmon-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Deactivated successfully.
Jan 10 17:31:46 compute-0 sudo[257347]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 17:31:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:46 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 17:31:46 compute-0 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:46 compute-0 sudo[257683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 10 17:31:46 compute-0 sudo[257683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 10 17:31:46 compute-0 sudo[257683]: pam_unix(sudo:session): session closed for user root
Jan 10 17:31:46 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:47 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15014 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:47 compute-0 ceph-mon[75249]: from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:47 compute-0 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 17:31:47 compute-0 ceph-mon[75249]: pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:47 compute-0 ceph-mon[75249]: from='client.15014 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:47 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 17:31:47 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411466825' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:31:48 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/411466825' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 17:31:48 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.950 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:31:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.953 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:31:48 compute-0 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.953 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:31:49 compute-0 ceph-mon[75249]: pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:49 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:50 compute-0 ovs-vsctl[257791]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 10 17:31:50 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:51 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 10 17:31:51 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 10 17:31:51 compute-0 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 10 17:31:51 compute-0 ceph-mon[75249]: pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:52 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: cache status {prefix=cache status} (starting...)
Jan 10 17:31:52 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: client ls {prefix=client ls} (starting...)
Jan 10 17:31:52 compute-0 lvm[258109]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 17:31:52 compute-0 lvm[258109]: VG ceph_vg1 finished
Jan 10 17:31:52 compute-0 lvm[258136]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 17:31:52 compute-0 lvm[258136]: VG ceph_vg2 finished
Jan 10 17:31:52 compute-0 lvm[258142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 17:31:52 compute-0 lvm[258142]: VG ceph_vg0 finished
Jan 10 17:31:52 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:52 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 10 17:31:53 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 10 17:31:53 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 10 17:31:53 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:53 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 10 17:31:53 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773928204' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 17:31:53 compute-0 ceph-mon[75249]: from='client.15018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:53 compute-0 ceph-mon[75249]: pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:53 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2773928204' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 10 17:31:54 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 10 17:31:54 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15026 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:54 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:31:54.330+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:31:54 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:31:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 17:31:54 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214932499' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: ops {prefix=ops} (starting...)
Jan 10 17:31:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 10 17:31:54 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104891342' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 10 17:31:54 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905887018' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:54 compute-0 ceph-mon[75249]: from='client.15020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:54 compute-0 ceph-mon[75249]: from='client.15024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/214932499' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3104891342' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 17:31:54 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/905887018' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 17:31:55 compute-0 podman[258406]: 2026-01-10 17:31:55.096667742 +0000 UTC m=+0.081784230 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 10 17:31:55 compute-0 podman[258418]: 2026-01-10 17:31:55.136209758 +0000 UTC m=+0.122174920 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 17:31:55 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: session ls {prefix=session ls} (starting...)
Jan 10 17:31:55 compute-0 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: status {prefix=status} (starting...)
Jan 10 17:31:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 17:31:55 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124089951' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:31:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 10 17:31:55 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300202661' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 17:31:55 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15038 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:55 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 17:31:55 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113301332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:31:56 compute-0 ceph-mon[75249]: from='client.15026 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:56 compute-0 ceph-mon[75249]: pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1124089951' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:31:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1300202661' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 17:31:56 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4113301332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:31:56 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15042 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 17:31:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643842098' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:31:56 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:56 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 10 17:31:56 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843629315' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: from='client.15038 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/643842098' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2843629315' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 17:31:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279908591' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 10 17:31:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489480267' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 17:31:57 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820373565' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 17:31:57 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15054 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:57 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 17:31:57 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:31:57.947+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 17:31:58 compute-0 ceph-mon[75249]: from='client.15042 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:58 compute-0 ceph-mon[75249]: pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4279908591' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:31:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/489480267' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 17:31:58 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3820373565' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 17:31:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 17:31:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242109946' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:31:58 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 10 17:31:58 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639877083' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 17:31:58 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15060 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:58 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:31:59 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:59 compute-0 ceph-mon[75249]: from='client.15054 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3242109946' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:31:59 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1639877083' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 17:31:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 10 17:31:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982846763' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:54.975393+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:24.188275+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:24.198813+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 229376 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 37)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:24.188275+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:24.198813+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:55.975641+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:56.975837+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:26.175652+0000 osd.2 (osd.2) 38 : cluster [DBG] 2.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:26.185922+0000 osd.2 (osd.2) 39 : cluster [DBG] 2.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 155648 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 39)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:26.175652+0000 osd.2 (osd.2) 38 : cluster [DBG] 2.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:26.185922+0000 osd.2 (osd.2) 39 : cluster [DBG] 2.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:57.976063+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386526 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x44267/0xa4000, compress 0x0/0x0/0x0, omap 0x763e, meta 0x1a289c2), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:58.976210+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:59.976505+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:00.976652+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:01.976776+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029172897s of 11.149907112s, submitted: 15
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:02.976903+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402333 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.977053+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:33.325897+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:33.336008+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe115000/0x0/0x4ffc00000, data 0x4af47/0xb3000, compress 0x0/0x0/0x0, omap 0x82f5, meta 0x1a27d0b), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 41)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:33.325897+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:33.336008+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.977285+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 180224 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.977457+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:35.349606+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:35.360109+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.977989+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 43)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:35.349606+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:35.360109+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.978134+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407980 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe110000/0x0/0x4ffc00000, data 0x4c55d/0xb6000, compress 0x0/0x0/0x0, omap 0x8580, meta 0x1a27a80), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.978280+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f(unlocked)] enter Initial
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001224 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000192
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000776 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000444 1 0.001106
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000831 2 0.000183
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 98304 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.978382+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012900 2 0.000203
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.014360 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003906 3 0.000352
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000141 1 0.000221
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137376 3 0.000191
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.978575+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.978792+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.978987+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416166 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.979163+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.979304+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.979413+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.979567+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 57344 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.979760+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.526871681s of 15.590026855s, submitted: 16
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418577 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.979911+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:48.328322+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:48.338834+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 45)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:48.328322+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.0 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:48.338834+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.0 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.980276+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 8192 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.980524+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:50.347104+0000 osd.2 (osd.2) 46 : cluster [DBG] 2.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:50.357581+0000 osd.2 (osd.2) 47 : cluster [DBG] 2.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 47)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:50.347104+0000 osd.2 (osd.2) 46 : cluster [DBG] 2.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:50.357581+0000 osd.2 (osd.2) 47 : cluster [DBG] 2.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 1040384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.980830+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:51.376339+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.6 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:51.386905+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.6 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 49)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:51.376339+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.6 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:51.386905+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.6 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.981197+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:52.342343+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:52.352828+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424802 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 51)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:52.342343+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:52.352828+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.981478+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.981601+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:54.355494+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:00:54.365955+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 53)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:54.355494+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:00:54.365955+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.981861+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.982112+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.982283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427213 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.982466+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.982629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 983040 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.982825+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 974848 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.982962+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.983109+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.689327240s of 15.030009270s, submitted: 10
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429626 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.983315+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:03.358290+0000 osd.2 (osd.2) 54 : cluster [DBG] 5.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:03.368852+0000 osd.2 (osd.2) 55 : cluster [DBG] 5.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 55)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:03.358290+0000 osd.2 (osd.2) 54 : cluster [DBG] 5.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:03.368852+0000 osd.2 (osd.2) 55 : cluster [DBG] 5.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 958464 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.985300+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 950272 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.986338+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:05.312219+0000 osd.2 (osd.2) 56 : cluster [DBG] 2.1e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:05.322797+0000 osd.2 (osd.2) 57 : cluster [DBG] 2.1e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 57)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:05.312219+0000 osd.2 (osd.2) 56 : cluster [DBG] 2.1e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:05.322797+0000 osd.2 (osd.2) 57 : cluster [DBG] 2.1e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 942080 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.988291+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.988523+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.988751+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.989852+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.990410+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.990619+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.991484+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.991739+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899189949s of 10.917829514s, submitted: 4
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.991974+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:14.276039+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:14.286600+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 59)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:14.276039+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1b scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:14.286600+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1b scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.992254+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:15.261022+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:15.271506+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 61)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:15.261022+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:15.271506+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.992605+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.992769+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436865 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.992976+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 868352 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.993177+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.993476+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 851968 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.993781+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:21.328668+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:21.339212+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 63)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:21.328668+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:21.339212+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.994051+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:22.301434+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:22.311971+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 65)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:22.301434+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:22.311971+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.994318+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.994451+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884464264s of 10.945921898s, submitted: 8
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.994609+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:25.222328+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:25.232911+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 67)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:25.222328+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:25.232911+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.994907+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.995156+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:27.257872+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:27.268483+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 69)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:27.257872+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:27.268483+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446511 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.995422+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 802816 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.995608+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.995819+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:30.223299+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:30.233862+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 71)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:30.223299+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:30.233862+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.996029+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 778240 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.996279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:32.234819+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:32.245355+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 770048 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 73)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:32.234819+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:32.245355+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451337 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.996520+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.996674+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.996877+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.997197+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035273552s of 12.054781914s, submitted: 8
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.997336+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:37.276984+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:37.287497+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 745472 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 75)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:37.276984+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:37.287497+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.997746+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:38.320031+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:38.330556+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 77)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:38.320031+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:38.330556+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.997996+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.998153+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.998312+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.998483+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.998617+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 712704 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.998825+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.999012+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.999730+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:46.342819+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:46.353248+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 79)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:46.342819+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:46.353248+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.999970+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458576 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.000112+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.947065353s of 12.031906128s, submitted: 6
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.000260+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:49.308973+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.15 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:49.319768+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.15 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 655360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 81)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:49.308973+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.15 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:49.319768+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.15 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.000490+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.000719+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.000966+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:52.413294+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:52.424019+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 83)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:52.413294+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.1c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:52.424019+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.1c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463402 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.001282+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.001504+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.001807+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:55.371920+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:01:55.382448+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 85)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:55.371920+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:01:55.382448+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.002235+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 638976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.002400+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.002606+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.002882+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.003039+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.003221+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.003354+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.003510+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993368149s of 15.010603905s, submitted: 6
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.003657+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:04.319657+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:04.330214+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 87)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:04.319657+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:04.330214+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.003957+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:05.354079+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:05.364685+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 89)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:05.354079+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:05.364685+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.004253+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.004469+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470637 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.004598+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.004784+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.004973+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 573440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.005136+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 565248 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.005274+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:12.291897+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.2 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:12.302215+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.2 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 91)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:12.291897+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.2 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:12.302215+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.2 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473048 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.005426+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.005576+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:14.246625+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:14.257175+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 93)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:14.246625+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:14.257175+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.005759+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.969283104s of 11.986205101s, submitted: 8
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.006134+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:16.306010+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:16.316609+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 95)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:16.306010+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:16.316609+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.006352+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:17.259419+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:17.269660+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 97)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:17.259419+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:17.269660+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.006620+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:18.228062+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:18.238650+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 99)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:18.228062+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:18.238650+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.006937+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.007293+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.007494+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.007899+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.008179+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.008385+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:24.162907+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:24.173427+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 101)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:24.162907+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:24.173427+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.008730+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.008970+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.009207+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 485103 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.009371+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.009553+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.836256981s of 13.859765053s, submitted: 8
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 458752 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.009775+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:30.165880+0000 osd.2 (osd.2) 102 : cluster [DBG] 3.1d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:30.176364+0000 osd.2 (osd.2) 103 : cluster [DBG] 3.1d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 103)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:30.165880+0000 osd.2 (osd.2) 102 : cluster [DBG] 3.1d scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:30.176364+0000 osd.2 (osd.2) 103 : cluster [DBG] 3.1d scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.010073+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.010252+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487516 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.010419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.010583+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:34.355948+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:34.366518+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 105)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:34.355948+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:34.366518+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.010850+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 434176 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.011038+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.011173+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489927 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.011417+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 401408 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.011594+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.011766+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.158172607s of 11.166720390s, submitted: 4
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.011918+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:41.332514+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:41.342931+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 107)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:41.332514+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:41.342931+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.012244+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494751 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.012396+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:43.346577+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:43.357209+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 109)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:43.346577+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:43.357209+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.012668+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.012899+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.013110+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.013222+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:47.311792+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:47.322191+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 111)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:47.311792+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:47.322191+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497162 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.013462+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.013615+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.013800+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:50.290185+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:50.300715+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 113)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:50.290185+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.8 scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:50.300715+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.8 scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 311296 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.014274+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:51.304651+0000 osd.2 (osd.2) 114 : cluster [DBG] 6.f scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  will send 2026-01-10T17:02:51.325810+0000 osd.2 (osd.2) 115 : cluster [DBG] 6.f scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client handle_log_ack log(last 115)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:51.304651+0000 osd.2 (osd.2) 114 : cluster [DBG] 6.f scrub starts
Jan 10 17:31:59 compute-0 ceph-osd[87867]: log_client  logged 2026-01-10T17:02:51.325810+0000 osd.2 (osd.2) 115 : cluster [DBG] 6.f scrub ok
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.014488+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.014650+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.014847+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.014987+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.015185+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.015365+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.015531+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.015822+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.015979+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.016107+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.016261+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.016406+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.016579+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.016767+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.016942+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.017077+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 229376 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.017266+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.017468+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.017652+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.017791+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.017983+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.018142+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.018320+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.018550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.018815+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.018965+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.019176+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.019314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.019468+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.019667+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.019958+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.020123+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.020258+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.020421+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.020656+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.025908+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.026052+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.026210+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.026367+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.026573+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.026740+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.026939+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.027118+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.027295+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.027505+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.027628+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.027764+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.027973+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.028118+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.028296+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.028465+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.028594+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.028802+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.028953+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.029188+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.029344+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.029594+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.029798+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.029981+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.030110+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.030279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.030422+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.030561+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.030721+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.030879+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.031019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.031201+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.031419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.031623+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.031785+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.032305+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.032751+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.033087+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.033324+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.033652+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.034002+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.036124+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.036287+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.036541+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.036766+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.036977+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.037138+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.037361+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.037492+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.037765+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.037941+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.038042+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.038267+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.038426+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.038595+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.038788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.039013+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.039231+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.039419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.039680+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.039947+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.040126+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.058683+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.059033+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.059272+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.059472+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.059764+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.059949+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.060163+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.060477+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.060682+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.060906+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.061126+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.061351+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.061509+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.061788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.061983+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.062286+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.062467+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.062747+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.062909+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.063167+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.063315+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.063566+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.063850+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.064031+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.064200+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.064345+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.064491+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.064786+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.064935+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.065086+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.065229+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.065427+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.065586+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.065729+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.065900+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.066053+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.066215+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.066419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.066616+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.066813+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.066995+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.067137+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.067368+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.067508+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.067740+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.067951+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.068276+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.068563+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.068885+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.069069+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.069272+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.069491+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.069651+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.069845+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.070083+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.070283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.070457+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.070800+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.071020+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.071253+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.071553+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.071798+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.072047+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.072239+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.072419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.072613+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.072821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.073065+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.073235+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.073373+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.073609+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.073858+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.074083+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.074295+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.074470+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.074647+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.074900+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.075098+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.075260+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.075426+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.075587+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.075803+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.075944+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.076120+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.076580+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.076804+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.076974+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.077329+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.077533+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.077689+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.077811+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.078009+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.078282+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.078507+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.078747+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.078898+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.079028+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.079194+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.079326+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.079473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.079631+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.079853+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.079986+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.080176+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.080440+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.080692+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.080915+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.143253+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.143458+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.143763+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.143899+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.144099+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.144384+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.144550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.144909+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.145212+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.145438+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.145802+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.146019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.146203+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.146331+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.146460+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.146629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.146831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.147024+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.147162+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.147326+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.147733+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.147978+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.148185+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.148362+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.148518+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.148657+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.149021+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.149261+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.149590+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.149753+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.150056+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.150230+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.150389+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.150533+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.150791+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.150961+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.151092+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.151233+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.151383+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.151523+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.151841+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.152013+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.152317+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.152490+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.152629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.152791+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.152977+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.153245+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.153411+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.153610+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.154047+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.154303+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.154576+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.154803+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.155055+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.155272+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.155442+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.155612+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.155868+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.156147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.156587+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.156849+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.157061+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.157331+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.157517+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.157662+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.157832+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.157991+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.158119+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.158276+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.158461+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.158598+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.158788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.159023+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.159248+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.159494+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.159620+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.159760+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.159877+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.159992+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.160183+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.160305+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.160422+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.160569+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.160693+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.160833+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.160985+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.161118+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.161313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.161464+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.161638+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.161796+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.162018+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.162269+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.162486+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.162672+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.162925+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.163159+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.163412+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.163680+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.164236+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.164416+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.164633+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.164983+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.165213+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.165381+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.165539+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.165811+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.165944+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.166078+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.166283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.166405+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.166505+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.166660+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.166758+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.166890+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.167020+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.167217+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.167372+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.167543+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.167771+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.168041+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.168160+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.170032+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.170224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.170398+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.170553+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.170733+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.170910+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.171151+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.171436+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.171597+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.171761+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.171883+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.172041+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.172195+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.172357+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.172532+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.172688+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.172872+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.173101+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.173295+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.173474+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.173629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.173784+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.173959+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s
                                           Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.174139+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.174306+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.174575+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.174803+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.175009+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.175234+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.175523+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.175766+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.175940+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.176064+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.176245+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.176383+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.176507+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.176638+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.176898+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.177108+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.177309+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.177487+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.177677+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.177950+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.178114+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.178265+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.178472+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.178653+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.178960+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.179145+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.179327+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.179500+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.179727+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.179880+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.180041+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.180188+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.180359+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.180552+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.180753+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.180991+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.181213+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.181397+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.181552+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.181729+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.181859+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.182071+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.182224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.197066+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.197351+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.197508+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.197658+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.371817+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.371940+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.372084+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.372314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.372498+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.372661+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.372789+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.372990+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.373164+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.373450+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.373629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.373821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.373975+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.374103+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.374224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.374410+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.374582+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.374779+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.374951+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.375237+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.375386+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.375542+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.375834+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.376049+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.376192+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.376359+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.376524+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.376769+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.376924+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.377124+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.377303+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.377459+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.377742+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.377933+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.378111+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.378328+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.378501+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.378745+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.378978+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.380788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.380952+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.381098+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.381344+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.381583+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.381769+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.381879+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.382070+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.382252+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.382419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.382575+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.382714+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.382897+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.383079+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.383217+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.383349+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.383478+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.383643+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.383800+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.383951+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.384229+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.384364+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.384667+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.384752+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.384887+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.385005+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.385135+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.385289+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.385494+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.385659+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.385838+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.386025+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.386179+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.386360+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.386841+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.386976+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.387751+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.388350+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.388595+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.388799+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.390086+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.390245+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.390396+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.390569+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.391279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.391798+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.392204+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.392402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.393122+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.393365+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.393519+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.393744+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.394045+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.394209+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.394425+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.394666+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.394862+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.395016+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.395224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.395386+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.395538+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.395692+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.395846+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.396028+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.396180+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.396384+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.396542+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.396727+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.397130+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.397276+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.397416+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.397587+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.397764+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.397965+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.398118+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.398279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.398442+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.398592+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.398756+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.398921+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.399097+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.399253+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.399387+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.399552+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.399783+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.399930+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.400080+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.400238+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.400437+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.400601+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.400786+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.401002+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.401131+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.401330+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.401489+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.401665+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.401779+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.401962+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.402145+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.402325+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.402490+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.403004+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.403164+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.404621+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.404827+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.405019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.405170+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.405507+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.405878+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.406522+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.407019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.407225+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.407393+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.407536+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.408059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.408212+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.408386+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.408526+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.408787+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.409493+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.409938+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.410094+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.410249+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.410515+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.410677+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.410885+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.411039+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.411175+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.411399+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.411588+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.411779+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.411992+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.412151+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.412505+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.412687+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.412957+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.413275+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.413486+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.413682+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.413821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.414003+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.414194+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.414336+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.414508+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.414742+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.414916+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.415140+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.415356+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.415560+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.415725+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.415875+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.416058+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.416263+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.416633+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.416867+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.417147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.417452+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.417608+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.418146+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.418314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.418457+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.418600+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.418765+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.418902+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.419039+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.419184+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.419330+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.419507+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.419745+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.419861+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.420059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.420314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.420504+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.420749+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.420894+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.421060+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.421219+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.421423+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.421646+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.421794+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.421934+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.422071+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.422278+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.422462+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.422678+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.422912+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.423059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.423275+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.423552+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.423716+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.423979+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.424249+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.424409+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.424571+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.424760+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.424928+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.425164+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.425379+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.425637+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.425809+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.425965+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.426143+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.426291+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.426463+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.426803+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.426980+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.427147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.427309+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.427565+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.427755+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.427905+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.428090+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.428238+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.428388+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.428888+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.429069+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.429222+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.429367+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.429543+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.429688+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.429893+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.430016+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621df718000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: get_auth_request con 0x5621dffbd400 auth_method 0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.430148+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.430319+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.430536+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.430795+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.430993+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.431134+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.438064+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.438199+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.438403+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.438738+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.438910+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.439082+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.439263+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.439399+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.439630+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.439758+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.439954+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.440119+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.440354+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.440575+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.440908+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.441106+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.441275+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.441565+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.441784+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.442031+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.442265+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.442404+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.442565+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.442749+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.442935+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.443108+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.443294+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.443505+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.443734+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.443891+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.444137+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.444327+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.444528+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.444676+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.444906+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.445032+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.445237+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.445417+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.445592+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.445771+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.445941+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.446164+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.446369+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.446570+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.446766+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.446905+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.447076+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.447220+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.447414+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.447576+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.447852+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.448124+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.448397+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.448639+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.448826+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.449105+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.449292+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.449489+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.449758+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.449994+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.450195+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.450335+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.450480+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.450742+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.450913+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.451057+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.451313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.451644+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.451916+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.452180+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.452542+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.452841+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.453261+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.453477+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.453853+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.454013+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.454242+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.454580+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.454873+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.455073+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.455343+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.455477+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.455617+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.455831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.456019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.456283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.456427+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.456651+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.456816+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.457595+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.458028+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.482374+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.482549+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.482847+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.482995+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.483169+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.483335+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.483482+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.483605+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.483887+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.484155+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.484327+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.484487+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.484642+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.484821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.484999+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.485196+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.485343+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.485592+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.485777+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.485961+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.486089+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.486221+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.486328+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.486585+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.486753+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.487015+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.487242+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.487550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.487821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.488082+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:28.488332+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.488598+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.488806+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.489074+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.489252+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.489496+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.489772+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.489929+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.490085+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.491950+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.492082+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.493319+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.494282+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.494659+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.494846+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.495448+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.495887+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.496206+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.496411+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.496767+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.497045+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.497234+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.497436+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.497621+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.497793+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.497945+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.498359+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.498748+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.499249+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.499562+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.499955+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.500363+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.500550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.500855+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.501128+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.501355+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.501833+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.502015+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.502190+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.502466+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.502635+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.502785+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.503005+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.503174+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.503348+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.503518+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.503687+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.503910+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.504102+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.504359+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.504469+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.504754+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.504895+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.505052+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.505212+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.505408+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.505644+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.505810+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.505997+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.506281+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.506433+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.506517+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.506805+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.506992+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.507230+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.507427+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.507600+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.507768+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.507936+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.508215+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.508400+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.508570+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.508768+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.508914+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.509062+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.509223+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.509763+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.509920+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.510328+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.511224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.511786+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.512635+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.513133+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.513550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.513829+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.514076+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.514323+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.514516+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.514878+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.515153+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.515360+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.515537+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.515940+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.516123+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.516405+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.516583+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.516821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.516959+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.517110+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.517285+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.517475+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.517789+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.517939+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.518123+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.518357+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.518593+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.518819+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.519043+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.519256+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.519499+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.519747+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.519918+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.520047+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.520220+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.520402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.520583+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.520792+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.520988+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.521225+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.521444+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.521612+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.521778+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.521973+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.522107+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.522509+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.522641+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.522837+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.523039+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.523221+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.523420+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.523653+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.523821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.524038+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.524238+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.524433+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.524560+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.524798+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.524970+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.525115+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.525295+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.525507+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.525683+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.526007+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.526281+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.526497+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.526815+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.527009+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.527270+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.527489+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.527742+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.527939+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.528135+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.528376+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.528652+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.528855+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.529072+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.529258+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.529463+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.529639+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.529983+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.530272+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.530510+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.530824+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.531062+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.531347+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea94b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.531687+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.532053+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.532448+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.532821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.533203+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.533536+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.533831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.534158+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.534310+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.534436+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.534625+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.534783+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.534932+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.535223+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.535509+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.535788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.535966+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.536155+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.536461+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.536680+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.536971+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.537172+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.537406+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.537742+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.537976+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.538170+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.538435+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.538739+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.539005+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.539266+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.539535+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.539751+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.539974+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.540280+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.540689+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.540969+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.541191+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.541428+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.541785+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.542455+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.543629+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.543814+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.544008+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.544199+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.544414+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.544543+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.546048+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.546522+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.546957+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.547283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.547516+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.547831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.548059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.548340+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.548953+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.549310+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.549795+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1018.819763184s of 1018.845520020s, submitted: 10
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.550073+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1335296 heap: 70705152 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.550366+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f210/0xbe000, compress 0x0/0x0/0x0, omap 0x8be6, meta 0x1a2741a), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 70 ms_handle_reset con 0x5621e1456c00 session 0x5621e19bee00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 10534912 heap: 75366400 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.550557+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 583003 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 18604032 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.550886+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 72 ms_handle_reset con 0x5621e1457000 session 0x5621e19db180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 18563072 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.551094+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.551449+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.551761+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.552335+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587971 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.552589+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.552794+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.552993+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.282839775s of 10.690481186s, submitted: 34
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.553240+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.553478+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.553663+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.553847+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.553973+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.554187+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.554456+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.554686+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.554943+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.555148+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.555366+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.555476+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.555621+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.555798+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.556020+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.556172+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.556344+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.556483+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.556675+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.556910+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.557116+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.557314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.557460+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.557631+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.557768+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.557950+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.558109+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.558252+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.558402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.558636+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.558816+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.558947+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.559119+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.559297+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.559518+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.559732+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.774143219s of 35.782062531s, submitted: 13
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 18374656 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1457400 session 0x5621deefbdc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc493c/0xd3f000, compress 0x0/0x0/0x0, omap 0x9a85, meta 0x1a2657b), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.559898+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03c00 session 0x5621e195a000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 18046976 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03400 session 0x5621e1462a80
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 75 ms_handle_reset con 0x5621e1457800 session 0x5621e1462fc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.560034+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604284 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 17104896 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1457400 session 0x5621e077b6c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1456c00 session 0x5621e0a22e00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02c00 session 0x5621e067ea80
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc78e6/0xd47000, compress 0x0/0x0/0x0, omap 0xa435, meta 0x1a25bcb), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02800 session 0x5621df500000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.560333+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 17080320 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.560469+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 17088512 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 77 ms_handle_reset con 0x5621e1456c00 session 0x5621e077b180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.561081+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 15884288 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 78 ms_handle_reset con 0x5621e1457400 session 0x5621e19da380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.561256+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 15704064 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.561398+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610354 data_alloc: 218103808 data_used: 858
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 15663104 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 79 ms_handle_reset con 0x5621e1bcec00 session 0x5621e14636c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.561528+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd47a000/0x0/0x4ffc00000, data 0xccb716/0xd4d000, compress 0x0/0x0/0x0, omap 0xa7da, meta 0x1a25826), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 15753216 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 81 ms_handle_reset con 0x5621e1c03400 session 0x5621e0a23a40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.561717+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 15540224 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 82 ms_handle_reset con 0x5621e1c02c00 session 0x5621df8eca80
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.562030+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 15392768 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.066205978s of 10.445180893s, submitted: 195
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947dc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1457000 session 0x5621df500380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.562302+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 15171584 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 84 ms_handle_reset con 0x5621e1457800 session 0x5621e0a22700
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.562465+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645550 data_alloc: 218103808 data_used: 4919
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 15007744 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fd468000/0x0/0x4ffc00000, data 0xcd301e/0xd60000, compress 0x0/0x0/0x0, omap 0xcff3, meta 0x1a2300d), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 85 ms_handle_reset con 0x5621e1457400 session 0x5621e19be8c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.562683+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 14778368 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c27/0xd6b000, compress 0x0/0x0/0x0, omap 0xdcb6, meta 0x1a2234a), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.562892+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 13451264 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.563079+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 86 ms_handle_reset con 0x5621e1456c00 session 0x5621e19da8c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 21553152 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.563215+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 20324352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 87 ms_handle_reset con 0x5621e1457000 session 0x5621df500e00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.563427+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664901 data_alloc: 218103808 data_used: 4919
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1456000 session 0x5621e19468c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1bcec00 session 0x5621e1947a40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0xcdadfd/0xd71000, compress 0x0/0x0/0x0, omap 0xea36, meta 0x2bc15ca), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.563573+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 18849792 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 89 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19801c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.563855+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 90 ms_handle_reset con 0x5621e1bd8c00 session 0x5621e0a22a80
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.564029+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 18989056 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 91 ms_handle_reset con 0x5621e1456000 session 0x5621e1981180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.402823448s of 10.174050331s, submitted: 340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.564279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 18792448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 92 ms_handle_reset con 0x5621e1457000 session 0x5621e1946e00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.564459+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc2b3000/0x0/0x4ffc00000, data 0xcdeaed/0xd79000, compress 0x0/0x0/0x0, omap 0x108a3, meta 0x2bbf75d), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675212 data_alloc: 218103808 data_used: 21160
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bcec00 session 0x5621e19bf880
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 18677760 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19db880
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.564664+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.564793+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.565089+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.565327+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.565446+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677676 data_alloc: 218103808 data_used: 21160
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.565669+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 95 ms_handle_reset con 0x5621e1456000 session 0x5621e1981500
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c02400 session 0x5621e196b6c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.565897+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.566124+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.566416+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.566599+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.566863+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.567041+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.567147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.567300+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.567473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c03800 session 0x5621e1991c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.875562668s of 17.118930817s, submitted: 126
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.567661+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1457000 session 0x5621e0061880
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.567874+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0xce568d/0xd87000, compress 0x0/0x0/0x0, omap 0x11946, meta 0x2bbe6ba), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bcec00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1bcec00 session 0x5621e0061500
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.568000+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.568198+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.568423+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 99 ms_handle_reset con 0x5621e1456000 session 0x5621dfe81a40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698132 data_alloc: 218103808 data_used: 29317
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 18628608 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.568597+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c02400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0xce82c1/0xd90000, compress 0x0/0x0/0x0, omap 0x11fd8, meta 0x2bbe028), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.568796+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.568956+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.569156+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 18546688 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.569344+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1bd8800 session 0x5621e0a22380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1c03800 session 0x5621e195b180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705238 data_alloc: 218103808 data_used: 33413
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 18522112 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.569478+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.190230370s of 10.334465027s, submitted: 49
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 18407424 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561800 session 0x5621df8edc00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561400 session 0x5621e19bfc00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.569632+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 102 ms_handle_reset con 0x5621e1456000 session 0x5621e1980000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.569830+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 18178048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.570049+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 103 ms_handle_reset con 0x5621e1bd8800 session 0x5621e199fc00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 ms_handle_reset con 0x5621e1c03800 session 0x5621e077a1c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.572031+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717689 data_alloc: 218103808 data_used: 33413
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.572208+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.572382+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.572578+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.572827+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.573017+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717945 data_alloc: 218103808 data_used: 34639
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.573172+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621df8ec000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561000 session 0x5621e1980fc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561800 session 0x5621e1946a80
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e038ddc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621e196b340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621e038dc00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 18087936 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.056212425s of 10.232484818s, submitted: 119
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e19be000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.573405+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e05b8000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 18096128 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.573554+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621df8ed880
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e1980c40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.573769+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560c00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.573983+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726131 data_alloc: 218103808 data_used: 35205
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc284000/0x0/0x4ffc00000, data 0xcf0a6f/0xda6000, compress 0x0/0x0/0x0, omap 0x13c81, meta 0x2bbc37f), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.574189+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.574394+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3560800 session 0x5621e070c380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.574572+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1456000 session 0x5621deefbdc0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1c03800 session 0x5621e0061340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19ee1c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 17776640 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0xcf207e/0xdaa000, compress 0x0/0x0/0x0, omap 0x13ff0, meta 0x2bbc010), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e3561400 session 0x5621deefbc00
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.574826+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 107 ms_handle_reset con 0x5621e3560400 session 0x5621dfe81880
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.575043+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1456000 session 0x5621e1946380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737866 data_alloc: 218103808 data_used: 35783
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.575279+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1bd8800 session 0x5621e1462540
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1c03800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1c03800 session 0x5621e0a23500
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3561400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3561400 session 0x5621e19db500
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.575462+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e3560000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.728271484s of 10.840334892s, submitted: 55
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3560000 session 0x5621e19be1c0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.575655+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e1456000 session 0x5621e196ba40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf62c5/0xdb3000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.575921+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3561800 session 0x5621e070cc40
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3560c00 session 0x5621e0a23340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf6283/0xdb2000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.576070+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _renew_subs
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740491 data_alloc: 218103808 data_used: 36295
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 110 ms_handle_reset con 0x5621e1bd8800 session 0x5621dfe81500
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.576246+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.576377+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc27a000/0x0/0x4ffc00000, data 0xcf784b/0xdb2000, compress 0x0/0x0/0x0, omap 0x15133, meta 0x2bbaecd), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 17760256 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.576563+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e196b180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1c02400 session 0x5621e1981340
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1456000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1456000 session 0x5621df8ec380
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.576718+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1457000
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e077a700
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0xcf8ce4/0xdb3000, compress 0x0/0x0/0x0, omap 0x15543, meta 0x2bbaabd), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: handle_auth_request added challenge on 0x5621e1bd8800
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.576922+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737991 data_alloc: 218103808 data_used: 32777
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 112 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf180
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.577112+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0xcfa2f2/0xdb5000, compress 0x0/0x0/0x0, omap 0x15a01, meta 0x2bba5ff), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.577313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.577502+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.577821+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0xcfb7be/0xdb8000, compress 0x0/0x0/0x0, omap 0x15c9a, meta 0x2bba366), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.578059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743972 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.406369209s of 13.619614601s, submitted: 155
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.578260+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.582254+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.582775+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.582979+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.583114+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.583383+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.583555+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.583812+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.584275+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.584462+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.584817+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.584999+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.585214+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.586066+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.586357+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.586827+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.587288+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.587544+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.587975+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.588397+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.588772+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.589046+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.589259+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.589482+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.589759+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.589993+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.590177+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.590445+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.590739+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.591056+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.591313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.591517+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.591828+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.592005+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.592202+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.592391+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.592596+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.592776+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.593053+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.593208+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.593386+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.593571+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.593813+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.765019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.765148+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.765337+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.765565+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.765809+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.766036+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.766266+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.766410+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.766595+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.766804+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.767046+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.767186+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.767514+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.767768+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.767955+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.768227+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.768398+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.768616+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.768818+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.769024+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.769243+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.769426+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.769613+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.769774+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.769953+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.770209+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.770463+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.770619+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.770786+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.770931+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.771117+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.771289+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.771417+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.771616+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.771751+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.771909+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.772051+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.772234+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 17743872 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.772370+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.772533+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.772753+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.773024+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf dump' '{prefix=perf dump}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:44.773211+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf schema' '{prefix=perf schema}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:45.773385+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:46.773540+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:47.773747+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:48.773872+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:49.774007+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:50.774139+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:51.774305+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:52.774455+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:53.774604+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:54.774734+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:55.774878+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:56.775048+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:57.775262+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:58.775398+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:59.775589+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:00.775773+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:01.775913+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:02.776098+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:03.776259+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:04.776391+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:05.776532+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:06.776661+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:07.776912+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:08.777126+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:09.777291+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:10.777430+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:11.777533+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:12.777679+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:13.777846+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:14.778020+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:15.778157+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:16.778330+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:17.778539+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:18.778658+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread fragmentation_score=0.000195 took=0.000113s
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:19.778747+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:20.778886+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:21.779053+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:22.779215+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:23.779409+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:24.779566+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:25.779719+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:26.779867+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:27.780176+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:28.780360+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:29.780545+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:30.780738+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:31.780884+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:32.781029+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:33.781179+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:34.781316+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:35.781456+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:36.781608+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:37.781941+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:38.782209+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:39.782459+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:40.782671+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:41.782882+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:42.783134+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:43.783353+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:44.783565+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:45.783778+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:46.783977+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:47.784224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:48.784411+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:49.784604+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:50.784783+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:51.784987+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:52.785168+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:53.785350+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:54.785569+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:55.785770+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:56.786013+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:57.786238+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:58.786431+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:59.786652+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:00.786833+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:01.787102+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:02.787316+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:03.787548+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:04.787795+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:05.788072+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:06.788259+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:07.788617+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:08.788817+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:09.788996+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:10.789163+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:11.789363+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:12.789877+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:13.790147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:14.790415+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:15.790831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:16.791038+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:17.791501+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:18.791728+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:19.791972+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:20.792248+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:21.792436+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:22.792765+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:23.793007+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:24.793312+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:25.793521+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:26.793741+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:27.794071+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:28.794373+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:29.794975+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:30.795230+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:31.795529+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:32.795714+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:33.795914+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:34.796169+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:35.796421+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:36.796641+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:37.796900+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:38.797225+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:39.797419+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:40.797663+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:41.797866+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:42.798044+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:43.798375+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:44.798668+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:45.798975+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:46.799147+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:47.799432+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:48.799605+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:49.799788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:50.799982+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:51.800255+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:52.800473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:53.800640+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:54.800837+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:55.801027+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:56.801229+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:57.801448+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:58.801649+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:59.801801+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:00.801962+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:01.802138+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:02.802306+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:03.802515+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:04.802764+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:05.802947+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:06.803173+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:07.803435+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:08.803756+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:09.803991+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:10.804236+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:11.804434+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:12.804645+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:13.804992+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:14.805180+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:15.805370+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:16.805588+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:17.805785+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:18.806849+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:19.807463+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:20.807767+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:21.807968+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:22.808524+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:23.809044+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:24.809345+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:25.809847+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:26.810109+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:27.810611+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:28.810977+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:29.811431+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:30.811590+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:31.811836+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:32.812046+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:33.812205+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:34.812432+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:35.812792+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:36.812993+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:37.813214+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:38.813402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:39.813688+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:40.814066+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:41.814369+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:42.814671+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:43.814898+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:44.815149+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:45.815375+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:46.815593+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:47.815846+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:48.816016+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:49.816231+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:50.816390+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:51.816630+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:52.816879+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:53.817069+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:54.817293+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:55.817485+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:56.817739+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:57.817977+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:58.818200+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:59.818369+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:00.818582+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:01.818756+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:02.818926+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:03.819121+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:04.819391+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:05.819676+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:06.820025+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:07.820315+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:08.820522+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:09.820788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:10.820974+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:11.821162+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:12.821367+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:13.821557+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:14.821819+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:15.822007+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:16.822231+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:17.822520+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:18.822740+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:19.823042+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:20.823250+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:21.823469+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:22.823659+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:23.823896+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:24.824080+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:25.824316+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:26.824482+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:27.824666+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:28.824824+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:29.825006+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:30.825154+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:31.825269+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:32.825428+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:33.825604+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:34.825805+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:35.826013+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:36.826121+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:37.826314+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:38.826503+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:39.826687+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:40.826876+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:41.827062+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:42.827300+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:43.827497+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:44.827841+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:45.828138+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:46.828322+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:47.828637+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:48.828858+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:49.829098+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:50.829302+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:51.829473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:52.829736+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:53.829949+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:54.830268+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:55.830501+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:56.830728+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:57.831016+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:58.831246+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:59.831512+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:00.831895+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:01.832103+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:02.832358+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:03.832610+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:04.832793+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:05.832996+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:06.833171+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:07.833465+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:08.833682+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:09.833940+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:10.834206+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:11.834452+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:12.834625+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:13.834830+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:14.835051+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:15.835343+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:16.835538+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:17.835797+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:18.836019+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:19.836250+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:20.836540+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:21.836815+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:22.837052+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:23.837368+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:24.837574+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:25.837812+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:26.838069+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:27.838286+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:28.838482+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:29.838781+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:30.838937+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:31.839137+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:32.839333+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:33.839520+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:34.839794+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:35.840097+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:36.840347+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:37.840645+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:38.840898+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:39.841186+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:40.841473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:41.841832+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:42.842138+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:43.842486+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:44.842851+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:45.843064+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:46.843264+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:47.843471+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:48.843617+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:49.843832+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:50.844111+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:51.844439+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:52.844805+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:53.845175+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:54.845495+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:55.845852+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:56.846150+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:57.846429+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:58.846778+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:59.847029+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:00.847328+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:01.847651+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:02.847884+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:03.848269+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:04.848486+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:05.848887+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:06.849092+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:07.849340+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:08.849550+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:09.849922+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:10.850331+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:11.850586+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:12.850928+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6272 writes, 24K keys, 6272 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6272 writes, 1344 syncs, 4.67 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2050 writes, 5141 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
                                           Interval WAL: 2050 writes, 951 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:13.851303+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:14.851538+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:15.851824+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:16.852050+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:17.852313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:18.852589+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:19.852813+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621dffbd400
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: get_auth_request con 0x5621e3560000 auth_method 0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:20.853116+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:21.853315+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:22.853640+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:23.853886+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:24.854096+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:25.854445+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:26.854739+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:27.855066+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:28.855305+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:29.856474+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:30.856989+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:31.857624+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:32.857783+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:33.858928+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:34.859391+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:35.859639+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:36.859954+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:37.860624+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:38.861041+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:39.861508+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:40.861808+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:41.862111+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:42.862318+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:43.863450+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:44.863832+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:45.864189+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:46.864596+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:47.864919+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:48.865204+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:49.865454+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:50.865803+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:51.866124+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:52.866300+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:53.866831+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:54.867108+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:55.867394+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:56.867815+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:57.868277+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:58.868626+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:59.868840+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:00.869106+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:01.869406+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:02.869890+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:03.870238+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:04.870549+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:05.870862+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:06.871201+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:07.871583+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:08.871862+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:09.872153+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:10.872485+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:11.872790+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:12.873207+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:13.873540+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:14.873933+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:15.874469+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:16.874850+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:17.875113+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:18.875304+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:19.875479+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:20.875641+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:21.875940+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:22.876210+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:23.876385+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:24.876584+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:25.876786+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:26.876996+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:27.877323+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:28.877670+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:29.877984+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:30.878224+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:31.878431+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:32.878677+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:33.878965+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:34.879186+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:35.879473+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:36.879987+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:37.880985+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:38.881420+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:39.881752+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:40.882161+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:41.882483+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:42.882824+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:43.883326+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:44.883559+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:45.883791+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:46.884189+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:47.884778+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:48.885022+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:49.885402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:50.885591+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:51.885791+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:52.886043+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:53.886249+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:54.886439+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:55.886675+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:56.886968+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:57.887267+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:58.887485+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:59.887783+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:00.888052+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:01.888272+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:02.888461+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:03.888621+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:04.888857+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:05.889048+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:06.889242+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:07.889456+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:08.889839+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:09.890811+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:10.890985+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:11.891425+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:12.891616+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:13.892128+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:14.892390+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:15.892953+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:16.893330+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:17.893612+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:18.893922+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:19.894230+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:20.894569+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:21.894799+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:22.894951+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:23.895278+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:24.895520+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:25.896269+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:26.896536+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:27.896902+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:28.897247+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:29.897517+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:30.897770+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:31.898088+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:32.898346+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:33.898579+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:34.898813+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:35.899023+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:36.899254+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:37.899585+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:38.899788+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:39.900656+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:40.901283+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:41.901620+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:42.902075+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:43.902402+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:44.902794+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:45.903141+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:46.903459+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:47.904307+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:48.904633+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:49.904876+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:50.905121+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:51.905585+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:52.905816+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:53.906160+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:54.906433+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:55.906660+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:56.906857+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:57.907313+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:58.907484+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:59.907671+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:00.907894+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:01.908146+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:02.908375+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:03.908600+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:04.908770+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:05.909000+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:06.909198+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:07.909409+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:08.909598+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:09.909796+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:10.909958+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:11.910177+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:12.910406+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:13.910668+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:14.910878+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:15.911121+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:16.911372+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:17.911819+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:18.912085+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:19.913059+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:20.913287+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:21.913611+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:22.913851+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:23.914104+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:24.914338+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:25.914601+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 16891904 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:26.914893+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:31:59 compute-0 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:31:59 compute-0 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16728064 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:27.915171+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 17:31:59 compute-0 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16703488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: tick
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_tickets
Jan 10 17:31:59 compute-0 ceph-osd[87867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:28.915403+0000)
Jan 10 17:31:59 compute-0 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:31:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 17:31:59 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640597616' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:31:59 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:31:59 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:31:59 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15070 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: from='client.15060 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:00 compute-0 ceph-mon[75249]: from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2982846763' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2640597616' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 17:32:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644995543' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:32:00 compute-0 nova_compute[237049]: 2026-01-10 17:32:00.370 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:00 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:00 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 17:32:00 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061084163' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:32:00 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:00 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:01 compute-0 nova_compute[237049]: 2026-01-10 17:32:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:01 compute-0 nova_compute[237049]: 2026-01-10 17:32:01.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 10 17:32:01 compute-0 nova_compute[237049]: 2026-01-10 17:32:01.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 10 17:32:01 compute-0 crontab[259192]: (root) LIST (root)
Jan 10 17:32:01 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 17:32:01 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206728935' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:32:01 compute-0 ceph-mon[75249]: from='client.15066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:01 compute-0 ceph-mon[75249]: from='client.15070 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1644995543' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 17:32:01 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4061084163' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 17:32:01 compute-0 nova_compute[237049]: 2026-01-10 17:32:01.492 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 10 17:32:01 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15082 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 17:32:02 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573740819' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:32:02 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15086 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.15074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4206728935' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.15082 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2573740819' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: from='client.15086 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 10 17:32:02 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399650282' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 17:32:02 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15090 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:02 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:03 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15094 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.380 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.380 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.381 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.382 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:32:03 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2399650282' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 17:32:03 compute-0 ceph-mon[75249]: from='client.15090 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:03 compute-0 ceph-mon[75249]: pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:03 compute-0 ceph-mon[75249]: from='client.15094 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:03 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15096 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:03 compute-0 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:32:03 compute-0 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:32:03.704+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 17:32:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 10 17:32:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341943870' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 17:32:03 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:32:03 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174244532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:32:03 compute-0 nova_compute[237049]: 2026-01-10 17:32:03.971 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.155 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4848MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 10 17:32:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265205213' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.265 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.265 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 10 17:32:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562716068' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.284 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386300087s of 10.565129280s, submitted: 38
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.410913+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.411059+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.411212+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:35.243461+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:35.254012+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 461170 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 49)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:35.243461+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:35.254012+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.411422+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.411638+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.411814+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.412002+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.412196+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:40.242252+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:40.252828+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 51)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:40.242252+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:40.252828+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.412459+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.412626+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.412924+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.413089+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.413248+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.413425+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.835360527s of 14.069359779s, submitted: 7
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.413605+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:47.308225+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:47.318630+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.413941+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 53)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:47.308225+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:47.318630+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.414209+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.414404+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471536 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.414604+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:51.357784+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:51.368201+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 55)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:51.357784+0000 osd.1 (osd.1) 54 : cluster [DBG] 7.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:51.368201+0000 osd.1 (osd.1) 55 : cluster [DBG] 7.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.414888+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.415039+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.415179+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.415377+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1228800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473949 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.415526+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.415678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012902260s of 11.086735725s, submitted: 4
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.415857+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:58.395083+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:58.405465+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 57)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:58.395083+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:58.405465+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.416612+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:59.387366+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:00:59.397948+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 59)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:59.387366+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:00:59.397948+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.416908+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:00.364031+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:00.374534+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 61)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:00.364031+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:00.374534+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.417377+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.417548+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.417781+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.417948+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.418963+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.419923+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.420799+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:07.365231+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:07.375798+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1130496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 63)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:07.365231+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:07.375798+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.421280+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.422288+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.422772+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026464462s of 12.046990395s, submitted: 8
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.423906+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:10.441201+0000 osd.1 (osd.1) 64 : cluster [DBG] 2.1b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:10.451643+0000 osd.1 (osd.1) 65 : cluster [DBG] 2.1b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 65)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:10.441201+0000 osd.1 (osd.1) 64 : cluster [DBG] 2.1b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:10.451643+0000 osd.1 (osd.1) 65 : cluster [DBG] 2.1b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.424829+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.425084+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.425283+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:14.407051+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.10 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:14.417566+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.10 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 67)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:14.407051+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.10 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:14.417566+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.10 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.426039+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488419 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.426463+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.426841+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.427169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:18.376581+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:18.387161+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 69)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:18.376581+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:18.387161+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.427380+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.427584+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490830 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.427784+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.427953+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.428092+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.865109444s of 13.878160477s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.428342+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:24.320246+0000 osd.1 (osd.1) 70 : cluster [DBG] 2.17 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:24.330920+0000 osd.1 (osd.1) 71 : cluster [DBG] 2.17 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.428541+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 71)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:24.320246+0000 osd.1 (osd.1) 70 : cluster [DBG] 2.17 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:24.330920+0000 osd.1 (osd.1) 71 : cluster [DBG] 2.17 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493243 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.428750+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.428930+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.429086+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.429442+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:29.373329+0000 osd.1 (osd.1) 72 : cluster [DBG] 5.13 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:29.383874+0000 osd.1 (osd.1) 73 : cluster [DBG] 5.13 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 73)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:29.373329+0000 osd.1 (osd.1) 72 : cluster [DBG] 5.13 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:29.383874+0000 osd.1 (osd.1) 73 : cluster [DBG] 5.13 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.429883+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 495656 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.430022+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.430209+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.430353+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.430562+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:33.475452+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.15 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:33.486066+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.15 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 75)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:33.475452+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.15 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:33.486066+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.15 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.430836+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498069 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.431039+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.431170+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.043815613s of 14.055473328s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.431384+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:38.375811+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:38.386375+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 77)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:38.375811+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:38.386375+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.431987+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.432129+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500482 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.432267+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:41.376259+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.16 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:41.386831+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.16 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 79)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:41.376259+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.16 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:41.386831+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.16 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.432474+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.432576+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:43.350088+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:43.360517+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 81)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:43.350088+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.9 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:43.360517+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.9 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.432779+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.432930+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.433084+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.433237+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:18.433401+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.433599+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.433780+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.805700302s of 12.849143982s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.433934+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:51.225123+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:51.235716+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.434228+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 4 last_log 85 sent 83 num 4 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:52.270027+0000 osd.1 (osd.1) 84 : cluster [DBG] 2.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:01:52.280461+0000 osd.1 (osd.1) 85 : cluster [DBG] 2.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 83)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:51.225123+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.12 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:51.235716+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.12 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 85)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:52.270027+0000 osd.1 (osd.1) 84 : cluster [DBG] 2.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:01:52.280461+0000 osd.1 (osd.1) 85 : cluster [DBG] 2.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.434493+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.434693+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.434928+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510130 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.435183+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.435525+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.435714+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.436024+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.436188+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:00.255229+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.3 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:00.265732+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.3 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 87)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:00.255229+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.3 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:00.265732+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.3 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512541 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.436428+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959676743s of 10.976650238s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.436609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:02.201748+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:02.212322+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 89)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:02.201748+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:02.212322+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.436954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.437141+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:04.223035+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:04.233661+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 91)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:04.223035+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:04.233661+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.437436+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:05.195136+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:05.205720+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 93)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:05.195136+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:05.205720+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522187 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.437928+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:06.206433+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.11 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:06.216894+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.11 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 95)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:06.206433+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.11 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:06.216894+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.11 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.438183+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.438336+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.438559+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:09.126044+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:09.136866+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 97)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:09.126044+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:09.136866+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.438827+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.438990+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.439165+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.439312+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.439492+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.439639+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.439836+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.439927+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 737280 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.874062538s of 15.986262321s, submitted: 10
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.440079+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:18.187943+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:18.198500+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 99)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:18.187943+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:18.198500+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.440352+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:19.232471+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.1d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:19.242971+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.1d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 101)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:19.232471+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.1d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:19.242971+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.1d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.440619+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:20.262501+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:20.272880+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 531833 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 103)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:20.262501+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:20.272880+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.441070+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.441292+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.441564+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.441732+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.441888+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:25.198768+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:25.209251+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534244 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 105)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:25.198768+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:25.209251+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.442154+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.442356+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.442534+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.442841+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.443036+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020789146s of 13.038912773s, submitted: 8
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 536655 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.443275+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:31.227118+0000 osd.1 (osd.1) 106 : cluster [DBG] 4.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:31.237626+0000 osd.1 (osd.1) 107 : cluster [DBG] 4.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 107)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:31.227118+0000 osd.1 (osd.1) 106 : cluster [DBG] 4.5 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:31.237626+0000 osd.1 (osd.1) 107 : cluster [DBG] 4.5 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.443741+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.443910+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.444090+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.444240+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:35.395296+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1a scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:35.405857+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1a scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 109)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:35.395296+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1a scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:35.405857+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1a scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539068 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.444485+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.444820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.445126+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.445362+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:39.428528+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.18 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:39.439142+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.18 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 111)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:39.428528+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.18 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:39.439142+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.18 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.445788+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 541481 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.446025+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.446279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.446585+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.446909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.447136+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.103586197s of 15.117080688s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543894 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.447307+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:46.344075+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:46.354683+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 113)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:46.344075+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:46.354683+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.447528+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.447864+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.448510+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:49.294155+0000 osd.1 (osd.1) 114 : cluster [DBG] 4.8 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:49.304671+0000 osd.1 (osd.1) 115 : cluster [DBG] 4.8 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 115)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:49.294155+0000 osd.1 (osd.1) 114 : cluster [DBG] 4.8 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:49.304671+0000 osd.1 (osd.1) 115 : cluster [DBG] 4.8 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.449020+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546305 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.449163+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.449299+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.449498+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:53.331729+0000 osd.1 (osd.1) 116 : cluster [DBG] 4.14 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:53.342209+0000 osd.1 (osd.1) 117 : cluster [DBG] 4.14 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 117)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:53.331729+0000 osd.1 (osd.1) 116 : cluster [DBG] 4.14 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:53.342209+0000 osd.1 (osd.1) 117 : cluster [DBG] 4.14 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.449820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.449982+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.450142+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548718 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.450359+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009474754s of 12.021146774s, submitted: 6
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.450611+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:58.365230+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:58.389731+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 119)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:58.365230+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.4 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:58.389731+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.4 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.451279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:59.336401+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:02:59.350514+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 121)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:59.336401+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.b scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:02:59.350514+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.b scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.451442+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:00.365757+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:00.380106+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 123)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:00.365757+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:00.380106+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.451644+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 555951 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.451806+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.451967+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:03.346791+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:03.357324+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 125)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:03.346791+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.1 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:03.357324+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.1 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.452205+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.452330+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.452497+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-mon[75249]: from='client.15096 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/341943870' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1174244532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3265205213' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/562716068' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.452692+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.452949+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.453141+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.453307+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.453504+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.718189240s of 13.738556862s, submitted: 8
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.453768+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:12.103943+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:12.118153+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 127)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:12.103943+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.6 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:12.118153+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.6 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.454013+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.454229+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.454478+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:15.095406+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:15.105889+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 129)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:15.095406+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.2 scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:15.105889+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.2 scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.454760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:16.100152+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:16.117781+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 565595 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 131)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:16.100152+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:16.117781+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.454977+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:17.086127+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  will send 2026-01-10T17:03:17.100173+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client handle_log_ack log(last 133)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:17.086127+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.c scrub starts
Jan 10 17:32:04 compute-0 ceph-osd[86809]: log_client  logged 2026-01-10T17:03:17.100173+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.c scrub ok
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.455346+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.455577+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.455816+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.456032+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.456173+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.456348+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.456525+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.456662+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.456807+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.456944+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.457322+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.457752+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.458003+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.458222+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.458398+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.458618+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.458826+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.458990+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.459176+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.459344+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.459478+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.459678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.459855+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.460009+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.460180+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.460332+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.460492+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.460661+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.460782+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.460977+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.461099+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.461381+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.461522+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.461754+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.461986+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.462134+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.462283+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.462413+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.462545+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.462719+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.462865+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.463169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.463521+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.463674+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.463782+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.463911+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.464228+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.464534+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.464797+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.465125+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.465376+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.465609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.465961+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.466281+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.466428+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.466600+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.466771+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.466892+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.467038+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.478398+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.478860+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.479065+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.479389+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.479758+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.479931+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.480207+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.480451+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.480688+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.480912+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.481083+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.481240+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.481565+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.481761+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.481970+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.482194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.482358+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.482493+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.482622+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.482792+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.482952+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.483085+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.483289+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.483452+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.483593+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.483764+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.483931+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.484055+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.484190+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.484320+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.484496+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.484646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.484857+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.485016+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.485139+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.485584+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.485939+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.486131+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.486300+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.486517+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.486681+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.486869+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.487053+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.487218+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.487351+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.487484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.488000+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.488200+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.488421+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.488570+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.488806+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.488961+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.489197+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.489421+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.489578+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.489765+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.489934+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.490123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.490323+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.490457+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.490681+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.490941+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.491177+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.491351+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.491581+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.491789+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.491967+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.492198+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.492365+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.492577+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.492796+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.493009+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.493261+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.493433+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.493606+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.493756+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.493885+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.494050+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.494207+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.494397+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.494645+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.494881+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.495075+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.495240+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.495435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.495626+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.495779+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.495962+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.496110+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.496280+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.496446+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.496772+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.497011+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.497152+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.497333+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.497484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.497779+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.497922+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.498096+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.498249+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.498386+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.498546+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.498789+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.498913+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.499064+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.499288+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.499452+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.499624+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.499759+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.499904+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.500024+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.500143+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.500338+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.509325+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.509516+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.509654+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.509823+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.509968+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.510194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.510388+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.510655+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.510827+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.511038+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.511253+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.511542+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.511778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.511916+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.512063+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.512212+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.512435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.512598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.512790+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.512988+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.513123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.513286+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.513431+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.513623+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.513748+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.513894+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.514033+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.514184+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.514353+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.514594+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.514776+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.514915+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.515097+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.515266+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.515426+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.515616+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.515791+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.515983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.516210+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.516441+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.516567+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.516709+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.516820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.516965+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.517164+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.517394+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.517552+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.517685+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.517880+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.518127+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.518279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.518431+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.518826+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.518996+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.519166+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.519301+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.519500+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.519628+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.519760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.519934+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.520086+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.520219+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.520378+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.520824+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.520982+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.521153+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.521326+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.521488+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.521635+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.521821+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.522031+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.522182+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.522364+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.522539+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.522748+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.522963+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.523157+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.523316+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.523476+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.523774+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.523983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.524186+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.524373+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.524546+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.524681+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.524886+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.525097+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.525238+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.525370+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.525542+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.525677+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.525834+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.525909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.526022+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.526162+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.526348+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.526545+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.526684+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.526893+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.527080+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.527256+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.527440+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.527665+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.527980+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.528161+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.528351+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.528502+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.528691+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.528903+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.529171+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.529378+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.529565+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.529796+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.529967+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.530146+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.530272+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.530460+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.530611+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.530756+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.530918+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.531128+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.531281+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.531439+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.531569+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.531687+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.531816+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.532000+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.532276+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.532415+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.532664+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.532760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.532969+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.533156+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.533355+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.533547+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.533670+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.533771+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.533963+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.534087+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.534291+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.534485+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.534665+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.534819+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.534967+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.535163+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s
                                           Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.535323+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.535585+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.535785+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.535982+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.536317+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.536529+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.536785+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.536988+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.537208+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.537432+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.537777+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.538012+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.538235+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.538467+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.538732+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.538904+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.539059+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.539267+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.539474+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.539783+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.540002+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.540196+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.540413+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.540581+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.540835+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.540987+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.541179+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.541325+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.541494+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.541694+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.541934+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.542117+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.542286+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.542457+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.542673+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.542876+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.543007+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.543122+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.543334+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.543484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.543638+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.543813+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.543958+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.544079+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.544266+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.544504+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.544663+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.544837+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.544986+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.545134+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.545288+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.545510+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.545737+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.545908+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.546220+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.546456+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.546608+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.546799+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.546960+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.547141+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.547383+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.547560+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.547789+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.547935+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.548122+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.548484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.548623+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.548774+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.548921+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.549370+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.549567+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.549797+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.550161+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.550305+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.550598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.550776+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.550955+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.551116+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.551311+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.551475+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.551619+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.551820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.551971+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.552139+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.552342+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.552512+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.552643+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.553109+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.553364+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.553757+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.553960+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.554200+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.554454+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.554780+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.555008+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.555429+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.555677+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.555950+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.556088+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.556229+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.556441+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.556627+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.556839+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.556985+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.557204+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.557416+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.557592+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.557858+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.558054+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.558347+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.558623+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.558868+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.559158+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.559363+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.559620+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.559762+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.560048+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.560232+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.560475+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.560636+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.560805+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.561032+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.561380+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.561737+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.562046+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.562279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.562456+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.562779+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.563016+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.563169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.563342+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.563497+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.563759+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.563923+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.564132+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.564299+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.564458+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.655883+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.656038+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.656437+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.656595+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.656829+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.657000+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.657158+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.657382+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.657566+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.657779+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.657952+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.658194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.658417+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.658571+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.658721+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.658910+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.659081+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.659316+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.659640+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.659794+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.659954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.660151+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.660308+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.660473+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.660609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.660787+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.660939+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.661094+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.661288+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.661483+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.661656+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.661807+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.661969+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.662123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.662276+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.662420+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.662576+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.662830+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.663018+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.663149+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.663259+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.663448+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.663614+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.663778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.663990+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.664163+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.664296+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.664491+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.689906+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.690160+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.690300+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.693883+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.694023+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.694175+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.694295+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.694435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.694605+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.694847+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.695023+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.695175+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.695340+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.695601+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.695794+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.695969+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.696132+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.696750+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.696912+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.697358+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.697585+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.697794+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.697943+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.698157+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.698326+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.698466+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.698654+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.698827+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.699093+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.699374+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.699746+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.700140+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.700321+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.700484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.700657+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.700778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.700942+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.701176+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.701362+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.701592+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.701747+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.701909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.702128+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.702320+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.702486+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.702638+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.702995+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.703143+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.703321+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.703509+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.703650+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.703770+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.704046+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.704322+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.704539+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.704759+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.704959+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.705169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.705363+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.705609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.705795+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.705954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.706203+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.706394+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.706534+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.706678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.706892+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.707120+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.707270+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.707478+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.707763+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.707973+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.708109+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.708367+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.708548+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.708777+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.709033+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.709628+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.709838+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.710404+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.710563+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.710762+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.710962+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.711117+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.711262+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.711404+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.711545+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.711770+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.711924+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.712127+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.712364+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.712566+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.712736+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.712941+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.713136+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.713298+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.713492+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.713938+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.726361+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.726608+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.726810+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.726983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.727131+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.727264+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.727380+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.727580+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.727756+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.728796+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.728936+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.729168+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.729305+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.729474+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.729638+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.729788+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.729947+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.730078+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.730261+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.730409+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.730573+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.730803+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.730955+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.731165+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.731310+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.731451+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.731618+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.731753+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.731891+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.732093+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d5962f7000 session 0x55d596c1a700
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b79c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d597b79000 session 0x55d596c0e700
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5962f7000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.732225+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.732414+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.732574+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.732768+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.732911+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.733070+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.733263+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.733397+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.733522+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.733640+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.733809+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.734097+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.734309+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.734495+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.734672+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.734859+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.735004+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.735131+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.735296+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.735438+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.735584+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.735981+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.736233+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.736438+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.736577+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.736837+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.737020+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.737245+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.737410+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.737573+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.737722+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.737952+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.738136+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.738321+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.738502+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.738659+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.738934+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.739107+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.739329+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.739481+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.739625+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.739923+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.740086+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.740261+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.740491+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.740645+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.740809+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.740989+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.741178+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.741414+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.741564+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.741861+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.742011+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.742185+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.742342+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.742526+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.742726+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.742850+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.743056+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.743300+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.743492+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.743818+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.744003+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.744192+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.744363+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.744593+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.744788+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.744973+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.745279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.745483+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.745635+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.746299+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.746455+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.746860+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.746996+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.747130+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.747382+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.747538+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.747669+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.747838+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.748027+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.748219+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.748487+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.748748+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.748920+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.749078+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.749305+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.749606+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.749771+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.749924+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.750034+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.750240+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.750424+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.750547+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.750672+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.750892+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.751003+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.751137+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.751280+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.751486+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.751646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.751940+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.752152+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.752282+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.752442+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.752607+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.752817+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.753024+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.753269+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.753472+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.753655+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.753892+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.754091+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.754256+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.754597+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.755475+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.755898+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.756122+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.756274+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.756471+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.756658+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.756875+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.757081+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.757335+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.757521+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.757791+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.758017+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.758199+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.758464+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.758860+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:28.759119+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.759432+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.759613+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.759804+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.760072+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.760290+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.760412+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.760602+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.760920+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.761150+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.761393+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.764022+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.766102+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.766620+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.768449+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.769170+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.770674+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.771251+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.771804+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.772119+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.772805+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.773796+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.774123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.774542+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.774769+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.775148+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.775647+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.775909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.776204+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.776352+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.776563+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.777148+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.777492+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.777817+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.778112+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.778329+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.778646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.778857+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.779097+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.779344+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.779491+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.779771+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.780018+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.780210+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.780395+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.780565+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.780766+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.780919+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.781156+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.781370+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.781575+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.781842+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.782037+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.782218+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.782437+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.782552+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.782826+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.782978+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.783148+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.783246+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.783334+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.783560+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.783765+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.783980+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.784152+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.784331+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.784540+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.784726+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.784927+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.785096+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.785286+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.785684+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.785900+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.786165+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.786350+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.788005+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.788152+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.788683+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.789445+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.789837+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.789986+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.790271+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.790579+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.790885+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.791086+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.791339+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.791621+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.791928+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.792269+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.792508+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.792791+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.793283+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.793533+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.793760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.793968+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.794245+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.794450+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.794758+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.794995+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.795351+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.795516+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.795761+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.796001+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.796218+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.796435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.796588+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.796806+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.796965+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.797880+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.798077+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.798264+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.798531+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.798691+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.798880+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.799027+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.799194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.799350+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.799543+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.799873+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.800058+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.800227+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.800397+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.800569+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.800730+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.800930+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.801169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.801355+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.801533+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.801820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.802000+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.802160+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.802565+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.802886+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.803039+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.803191+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.803343+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.803496+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.803609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.803783+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.803949+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.804168+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.804423+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.804594+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.804770+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.804945+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.805194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.805546+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.805862+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.806169+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.806532+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.806778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.807077+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.807408+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.807563+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.807812+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.807984+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.808199+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d595283a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.808420+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.808598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.808836+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.809038+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.809380+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.809616+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.809810+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.810013+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.810272+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.810459+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.810663+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.810946+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.811145+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.811302+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.811547+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.811785+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.812008+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.812291+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.812524+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.812728+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.812913+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.813084+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.813259+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.813485+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.813764+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.813942+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.814091+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.814238+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.814495+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.814757+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.814891+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.815143+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.815381+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.815594+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.815851+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.816035+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.816214+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.816480+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.816805+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.817092+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.817295+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.817570+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.817772+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.817983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.818238+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.819435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.819653+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.820336+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.820774+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.820985+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.821138+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.821679+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.821948+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.822125+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.822384+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.822959+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.823254+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.823642+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.824069+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.824381+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.824545+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.824797+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.824986+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.825203+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3ec00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.825449+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 987.875549316s of 987.892700195s, submitted: 8
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 548864 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.825605+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 573166 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 8749056 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.825862+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 71 ms_handle_reset con 0x55d598f3ec00 session 0x55d5975fc000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 15982592 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd892000/0x0/0x4ffc00000, data 0x8c0ead/0x936000, compress 0x0/0x0/0x0, omap 0xbe6e, meta 0x1a24192), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.826021+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 15851520 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.826221+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 24100864 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 72 ms_handle_reset con 0x55d598f3f400 session 0x55d598b91c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.826440+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.826651+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.826881+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.827182+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.827391+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.827641+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.827957+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.579319000s of 11.783731461s, submitted: 42
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.828135+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.828344+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.828562+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.828796+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.828985+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.829166+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.829359+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.829521+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.829786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.830026+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.830312+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.830516+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.830768+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.830965+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.831137+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.831326+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.831467+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.831638+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.831813+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.831998+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.832190+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.832322+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.832488+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.832734+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.832887+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.833055+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.833250+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.833373+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.833581+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.833773+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.833981+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.834197+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.834409+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.834587+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.834762+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.930355072s of 34.937496185s, submitted: 13
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.834900+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29800 session 0x55d596c1a1c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 23683072 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29c00 session 0x55d598e47c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29000 session 0x55d596cc2e00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.835572+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 75 ms_handle_reset con 0x55d596c28000 session 0x55d598e74700
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 23625728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.836123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29c00 session 0x55d5975fd180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29000 session 0x55d598e26700
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d598f3f400 session 0x55d596cc2540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d599109800 session 0x55d5975fc000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.836345+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 22364160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fcc0f000/0x0/0x4ffc00000, data 0x1539540/0x15bc000, compress 0x0/0x0/0x0, omap 0xd5db, meta 0x1a22a25), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.836476+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715390 data_alloc: 218103808 data_used: 19
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 77 ms_handle_reset con 0x55d596c28000 session 0x55d596cc2a80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.836586+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 78 ms_handle_reset con 0x55d596c29000 session 0x55d597b17180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 22298624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.836809+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 22274048 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.837009+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 79 ms_handle_reset con 0x55d596c5d000 session 0x55d597b16c40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 22249472 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.837171+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc07000/0x0/0x4ffc00000, data 0x153d348/0x15c3000, compress 0x0/0x0/0x0, omap 0xe026, meta 0x1a21fda), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 81 ms_handle_reset con 0x55d596c29800 session 0x55d5975fd180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 22241280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.837398+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733219 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 82 ms_handle_reset con 0x55d596c29c00 session 0x55d596edea80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 22110208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.837611+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447594643s of 10.715058327s, submitted: 154
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c28000 session 0x55d597b16fc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c29000 session 0x55d597b176c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 22167552 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.837769+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 84 ms_handle_reset con 0x55d596c29800 session 0x55d597b16540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 22159360 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.837909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 85 ms_handle_reset con 0x55d596c5d000 session 0x55d5988be540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 22077440 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3ec00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.838078+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fc3ed000/0x0/0x4ffc00000, data 0x1d46de1/0x1ddd000, compress 0x0/0x0/0x0, omap 0xf073, meta 0x1a20f8d), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 21741568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.838251+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962711 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20537344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 86 ms_handle_reset con 0x55d598f3ec00 session 0x55d596d128c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.838467+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 20496384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.838619+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 87 ms_handle_reset con 0x55d596c28000 session 0x55d597e6a000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c29800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 20545536 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x15499ac/0x15e2000, compress 0x0/0x0/0x0, omap 0xf5cf, meta 0x2bc0a31), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.838784+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29000 session 0x55d598e47880
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29800 session 0x55d596edf6c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c5d000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 20799488 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.838943+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 89 ms_handle_reset con 0x55d596c5d000 session 0x55d597b2d180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 20652032 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.839117+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 90 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773529 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 20602880 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.839306+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 91 ms_handle_reset con 0x55d596c28000 session 0x55d597b17500
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 20578304 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.893272400s of 10.285517693s, submitted: 131
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.839489+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 92 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13880
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 19365888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b500
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.839657+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b5000 session 0x55d597b161c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1550b01/0x15ed000, compress 0x0/0x0/0x0, omap 0x10a25, meta 0x2bbf5db), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.839983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 19259392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.840219+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784952 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.840461+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.840774+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.841141+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 94 ms_handle_reset con 0x55d5993b5400 session 0x55d596ebc380
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.841295+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15535da/0x15f4000, compress 0x0/0x0/0x0, omap 0x11026, meta 0x2bbefda), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d598f3f400 session 0x55d597bd8c40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d596c28000 session 0x55d596ebce00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b880
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.841613+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.841918+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.842097+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.842222+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.842447+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.842649+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.842857+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.843101+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.843274+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b5000 session 0x55d598e476c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.843483+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774b400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.301731110s of 17.467674255s, submitted: 110
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d59774b400 session 0x55d596cc2c40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x15560f8/0x15fb000, compress 0x0/0x0/0x0, omap 0x11667, meta 0x2bbe999), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.843681+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796587 data_alloc: 218103808 data_used: 8141
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d596c28000 session 0x55d596edf180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.843849+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.844027+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 99 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13dc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.844851+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 18915328 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b4000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b5000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.845012+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.845125+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805611 data_alloc: 218103808 data_used: 10701
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.845278+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.845425+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 18587648 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d5993b7800 session 0x55d597e6b6c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d598fb8400 session 0x55d596ec8540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.845565+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 18579456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.845778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d5993b7c00 session 0x55d596c1b500
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d596c28000 session 0x55d598e8d6c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204052925s of 10.307401657s, submitted: 49
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 18505728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.845947+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812037 data_alloc: 218103808 data_used: 10802
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fb9ff000/0x0/0x4ffc00000, data 0x157f8d4/0x162b000, compress 0x0/0x0/0x0, omap 0x11e17, meta 0x2bbe1e9), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 102 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8da40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 18489344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.846123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 103 ms_handle_reset con 0x55d598fb8400 session 0x55d598e47a40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 18472960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.846345+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fb9fa000/0x0/0x4ffc00000, data 0x1580ef5/0x162e000, compress 0x0/0x0/0x0, omap 0x120f9, meta 0x2bbdf07), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5993b7800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d5993b7800 session 0x55d597bd8e00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.846503+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.846753+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.846930+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825379 data_alloc: 218103808 data_used: 10802
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.847075+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.847271+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.847403+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d599109400 session 0x55d598e741c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d596c28000 session 0x55d597bd8c40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d597b78800 session 0x55d597b17500
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597e6a000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 18096128 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598e74fc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8c00 session 0x55d598eb3180
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d596c28000 session 0x55d596ec8700
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d598eb2540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.847566+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598eb3340
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.847786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829661 data_alloc: 218103808 data_used: 10817
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.847974+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d599109000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.297220230s of 11.404709816s, submitted: 74
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d599109000 session 0x55d597e6ac40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.848148+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fb9f1000/0x0/0x4ffc00000, data 0x1585084/0x163b000, compress 0x0/0x0/0x0, omap 0x12d33, meta 0x2bbd2cd), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.848354+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.848635+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.848800+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830678 data_alloc: 218103808 data_used: 10833
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d596d121c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598f3f400 session 0x55d5975fddc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.848963+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598fb8400 session 0x55d596d13340
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598b916c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774a400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774a400 session 0x55d598e75dc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 17752064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598fdc380
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.849072+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 17817600 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d597b78800 session 0x55d596c1aa80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x1588082/0x1643000, compress 0x0/0x0/0x0, omap 0x132c3, meta 0x2bbcd3d), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.849303+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 17809408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8ce00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.849485+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598fb8400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598fb8400 session 0x55d598e75c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774b400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774b400 session 0x55d598ff4a80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 17514496 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774ac00 session 0x55d596ebc000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.849646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843426 data_alloc: 218103808 data_used: 14965
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 17489920 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.849804+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.546408653s of 10.006135941s, submitted: 114
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d597b78800 session 0x55d598e8d340
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 16400384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.849998+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d596c28000 session 0x55d596d12a80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d598f3f400
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x158a889/0x1646000, compress 0x0/0x0/0x0, omap 0x13de3, meta 0x2bbc21d), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 16424960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.850938+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 110 ms_handle_reset con 0x55d598f3f400 session 0x55d597b2d500
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.851159+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.851380+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850847 data_alloc: 218103808 data_used: 10802
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fb9dd000/0x0/0x4ffc00000, data 0x158d35f/0x164a000, compress 0x0/0x0/0x0, omap 0x1463c, meta 0x2bbb9c4), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b4000 session 0x55d596cc3340
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b5000 session 0x55d597b2c540
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.851520+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d596c28000 session 0x55d597bd9880
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d59774ac00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.851691+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d59774ac00 session 0x55d5988bfdc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b78800
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.851930+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _renew_subs
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 112 ms_handle_reset con 0x55d597b78800 session 0x55d5988bea80
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fba02000/0x0/0x4ffc00000, data 0x156a94e/0x1628000, compress 0x0/0x0/0x0, omap 0x14a07, meta 0x2bbb5f9), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.852123+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.852299+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853791 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.852454+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fb9fd000/0x0/0x4ffc00000, data 0x156be1a/0x162b000, compress 0x0/0x0/0x0, omap 0x14c5d, meta 0x2bbb3a3), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.852788+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.852943+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.853272+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.853488+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.037557602s of 14.196849823s, submitted: 107
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.853786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.853992+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.854178+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.854395+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.854615+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.854809+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.854997+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.855189+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.855439+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.855680+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.855947+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.856435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.856792+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.856971+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.857229+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.857484+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.857908+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.858338+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.858677+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.859132+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.859385+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.859575+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.859820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.860059+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.860334+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.860607+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.860771+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.861054+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.861572+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.861810+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.862068+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.862348+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.862591+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.862790+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.862989+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.863159+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.863383+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.863817+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.864208+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.864441+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.864631+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.864821+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.865083+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.865293+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.865514+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.865791+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.865960+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.866126+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.866384+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.866552+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.866739+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.866976+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.867157+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.867296+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.867493+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.867619+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.867813+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.867997+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.868193+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.868375+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.868499+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.868767+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.868946+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.869122+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.869283+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.869468+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.869577+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.869725+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.869923+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.870093+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.870260+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.870459+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.870878+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.871070+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.871224+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.871354+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.871501+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.871644+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.871910+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.872041+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.872197+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.872325+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.872580+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:44.872816+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:45.872969+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16728064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:46.873095+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:47.873214+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:48.873379+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 16318464 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf dump' '{prefix=perf dump}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:49.873574+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf schema' '{prefix=perf schema}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:50.873735+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:51.873851+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:52.873979+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:53.874101+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:54.874225+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:55.874368+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:56.874544+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:57.874704+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:58.874831+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:59.875226+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:00.875411+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:01.875573+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:02.875720+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:03.875866+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:04.875991+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:05.876112+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:06.876246+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:07.876388+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:08.876529+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:09.876753+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:10.876951+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:11.877167+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:12.877366+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:13.877557+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:14.877728+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:15.877891+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:16.878110+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:17.878305+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:18.878480+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread fragmentation_score=0.000149 took=0.000078s
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:19.878790+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:20.879018+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:21.879191+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:22.879459+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:23.879784+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:24.880112+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:25.880428+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:26.880819+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:27.881368+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:28.881651+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:29.882044+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:30.882374+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:31.882616+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:32.882882+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:33.883186+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:34.883456+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:35.883853+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:36.884142+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:37.884352+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:38.884924+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:39.885301+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:40.885613+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:41.885854+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:42.886213+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:43.886513+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:44.886900+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:45.906888+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:46.907161+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:47.907383+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:48.907585+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:49.908150+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:50.908360+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:51.908925+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:52.909098+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:53.909235+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:54.909456+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:55.909655+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:56.909878+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:57.910066+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:58.910244+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:59.910527+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:00.910737+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:01.911031+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:02.911283+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:03.911493+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:04.911781+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:05.911936+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:06.912121+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:07.912323+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:08.912527+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:09.912786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:10.912979+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:11.913159+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:12.913424+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:13.913624+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:14.913835+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:15.914065+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:16.914225+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:17.914438+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:18.914639+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:19.915022+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:20.915272+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:21.915479+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:22.915777+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:23.916016+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:24.916317+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:25.916568+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:26.916803+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:27.916947+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:28.917080+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:29.917258+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:30.917495+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:31.917786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:32.917969+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:33.918188+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:34.918362+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:35.918589+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:36.918775+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:37.918954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:38.919154+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:39.919462+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:40.919687+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:41.919928+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:42.920125+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:43.920333+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:44.920536+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:45.920803+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:46.920983+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:47.921159+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:48.921416+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:49.921680+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:50.921915+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:51.922111+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:52.922341+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:53.922525+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:54.922757+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:55.922896+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:56.923041+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:57.923218+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:58.923378+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:59.923558+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:00.923751+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:01.923909+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:02.924067+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:03.924253+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:04.924438+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:05.924588+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:06.924828+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:07.925053+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:08.925252+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:09.925483+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:10.925674+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:11.925879+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:12.926011+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:13.926189+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:14.926378+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:15.926511+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:16.926786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:17.927199+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:18.927533+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:19.927808+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:20.928138+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:21.928320+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:22.928530+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:23.929271+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:24.929536+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:25.929870+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:26.930077+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:27.930609+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:28.931083+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:29.931650+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:30.931986+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:31.932381+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:32.932563+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:33.932767+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:34.932918+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:35.933053+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:36.933196+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:37.933332+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:38.933527+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:39.933835+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:40.934199+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:41.934510+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:42.934736+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:43.934940+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:44.935176+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:45.935392+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:46.935665+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:47.935851+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:48.936119+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:49.936441+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:50.936678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:51.936890+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:52.937119+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:53.955101+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:54.955363+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:55.955550+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:56.955760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:57.955964+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:58.956096+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:59.956324+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:00.956516+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:01.956938+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:02.957175+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:03.957370+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:04.957598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:05.957793+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:06.957951+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:07.958181+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:08.958408+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:09.958678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:10.958894+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:11.959087+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:12.959313+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:13.959570+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:14.959789+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:15.960004+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:16.960222+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:17.960508+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:18.960735+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:19.960996+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:20.961233+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:21.961427+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:22.961593+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:23.961840+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:24.961998+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:25.962172+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:26.962365+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:27.962547+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:28.962752+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:29.962966+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:30.963154+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:31.963308+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:32.963471+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:33.963642+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:34.963826+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:35.964013+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:36.964158+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:37.964324+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:38.964466+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:39.964635+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:40.964770+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:41.964937+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:42.965214+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:43.965422+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:44.965584+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:45.965754+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:46.965935+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 16195584 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:47.966139+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:48.966310+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:49.966829+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:50.967069+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:51.967257+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:52.967430+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:53.967661+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:54.967839+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:55.968080+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:56.968444+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:57.968662+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:58.968868+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:59.969142+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:00.969399+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:01.969665+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:02.969870+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:03.970083+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:04.970276+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:05.970493+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:06.970838+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 16187392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:07.971029+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:08.971288+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:09.971611+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:10.971795+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:11.972018+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:12.972280+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:13.972549+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:14.972762+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:15.972954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 16179200 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:16.973168+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:17.973373+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:18.973598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:19.973944+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:20.974194+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:21.974349+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:22.974646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:23.974821+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:24.975010+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:25.975224+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:26.975382+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:27.975553+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:28.975751+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:29.976027+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:30.976203+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:31.976424+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:32.976634+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:33.976836+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:34.977042+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 16302080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:35.977227+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:36.977402+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:37.977521+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:38.977788+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:39.978048+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:40.978248+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:41.978419+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:42.978595+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:43.978861+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:44.979119+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:45.979346+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:46.979620+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:47.979783+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:48.979996+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:49.980446+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:50.980635+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:51.980773+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:52.980937+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:53.981118+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 16293888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:54.981357+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:55.981562+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:56.981760+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:57.981943+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:58.982149+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:59.982477+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:00.982678+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:01.982884+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:02.983046+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:03.983281+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:04.983566+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6001 writes, 24K keys, 6001 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6001 writes, 1157 syncs, 5.19 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 4204 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 2.27 MB, 0.00 MB/s
                                           Interval WAL: 1449 writes, 642 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:05.983781+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:06.984067+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:07.984279+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:08.984526+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:09.984849+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:10.985041+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:11.985226+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 16285696 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:12.985386+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 16277504 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:13.985550+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 16277504 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: mgrc ms_handle_reset ms_handle_reset con 0x55d596b9e000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:32:04 compute-0 ceph-osd[86809]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: get_auth_request con 0x55d598f82c00 auth_method 0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:14.985785+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 16162816 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:15.986078+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 16162816 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:16.986227+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 16162816 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:17.986661+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 ms_handle_reset con 0x55d5962f7400 session 0x55d596c0e1c0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d596c28000
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 16171008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 ms_handle_reset con 0x55d597b79c00 session 0x55d598fdcfc0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d5962f6c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 ms_handle_reset con 0x55d5962f7000 session 0x55d598e26c40
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: handle_auth_request added challenge on 0x55d597b79c00
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:18.987885+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:19.988146+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:20.988340+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:21.988531+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:22.988778+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:23.989039+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 16154624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:24.989252+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:25.989466+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:26.989675+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:27.989895+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:28.990127+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:29.990452+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:30.990658+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:31.990826+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:32.991126+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:33.991465+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:34.991749+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:35.992186+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:36.992456+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:37.992780+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:38.993063+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:39.993333+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:40.993616+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:41.993853+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:42.994027+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:43.994227+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:44.994449+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:45.994762+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:46.995222+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:47.995478+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:48.995784+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:49.996065+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:50.996339+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:51.996676+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:52.997068+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:53.997240+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:54.997486+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:55.997646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:56.997970+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:57.998167+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:58.998347+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:59.998560+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:00.999103+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:01.999337+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:02.999534+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:03.999835+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:05.000149+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:06.000313+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:07.000500+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:08.000673+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:09.000937+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:10.001196+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:11.001414+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:12.001575+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:13.001769+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:14.002019+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:15.002420+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:16.002664+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:17.002879+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:18.003109+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:19.003339+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:20.003649+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:21.004885+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:22.005101+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:23.005259+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:24.005409+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:25.005646+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:26.005900+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:27.006158+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:28.006434+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:29.006608+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:30.006902+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:31.007112+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:32.007379+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:33.007550+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:34.007835+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:35.008504+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:36.009250+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:37.009548+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:38.009768+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:39.009964+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:40.010259+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:41.010542+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:42.010839+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:43.011113+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:44.011265+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:45.011483+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:46.011786+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:47.012093+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:48.012369+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:49.012596+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:50.012931+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:51.013112+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:52.013299+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:53.013582+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:54.013975+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:55.014256+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:56.014527+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:57.014801+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:58.015065+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:59.015289+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:00.015581+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:01.015863+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:02.016126+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:03.016286+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:04.016506+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:05.016682+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:06.016892+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:07.017078+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:08.017325+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:09.017683+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:10.018019+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:11.018360+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:12.018756+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:13.019066+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:14.019375+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:15.019639+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:16.019886+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:17.020064+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:18.020303+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:19.020537+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:20.020867+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:21.021090+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:22.021316+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:23.021534+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:24.021954+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:25.022202+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:26.022409+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:27.022598+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:28.022795+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:29.023090+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:30.023521+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:31.023758+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:32.023995+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:33.024176+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:34.024379+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:35.024591+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:36.024830+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:37.025011+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:38.025215+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:39.025381+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:40.025820+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:41.026185+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:42.026576+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:43.026960+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:44.027266+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:45.027580+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:46.027836+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 16138240 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:47.028089+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:48.028347+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:49.028585+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:50.028898+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:51.029126+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:52.029336+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:53.032435+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:54.032616+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:55.032791+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:56.033026+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:57.033241+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:58.033448+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:59.033641+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:00.033943+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:01.034139+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:02.034359+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:03.034600+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:04.034828+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:05.035033+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:06.035365+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:07.035690+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:08.035942+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:09.036148+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:10.036416+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:11.036642+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:12.036869+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:13.037042+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:14.037255+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:15.037453+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:16.037631+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:17.037869+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:18.038074+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:19.038297+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:20.038569+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:21.038794+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:22.038996+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:23.039173+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:24.039343+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:25.039517+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:26.039716+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:27.039891+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:28.040041+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:29.040202+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:04 compute-0 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:30.040408+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:31.040576+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 16416768 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:32.040837+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:32:04 compute-0 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 16269312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:33.041111+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: tick
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_tickets
Jan 10 17:32:04 compute-0 ceph-osd[86809]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:34.041337+0000)
Jan 10 17:32:04 compute-0 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 16211968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:04 compute-0 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 10 17:32:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241551644' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 10 17:32:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854965664' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 10 17:32:04 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 17:32:04 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76181218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.906 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.915 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 17:32:04 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.935 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.937 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 17:32:04 compute-0 nova_compute[237049]: 2026-01-10 17:32:04.937 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 17:32:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 10 17:32:05 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432863944' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 10 17:32:05 compute-0 rsyslogd[1006]: imjournal from <np0005580781:ceph-osd>: begin to drop messages due to rate-limiting
Jan 10 17:32:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 10 17:32:05 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987760947' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/241551644' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/854965664' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/76181218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1432863944' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3987760947' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 10 17:32:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 10 17:32:05 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292863134' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.941 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.945 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:05 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 10 17:32:05 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364774294' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.963 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.964 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.964 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:05 compute-0 nova_compute[237049]: 2026-01-10 17:32:05.964 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 17:32:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 10 17:32:06 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883747033' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 10 17:32:06 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807946841' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/292863134' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1364774294' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2883747033' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1807946841' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 10 17:32:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 10 17:32:06 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128955906' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 10 17:32:06 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1139: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:06 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 10 17:32:06 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201951822' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 10 17:32:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 17:32:07 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173546782' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 17:32:07 compute-0 nova_compute[237049]: 2026-01-10 17:32:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:07 compute-0 nova_compute[237049]: 2026-01-10 17:32:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 17:32:07 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/128955906' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 10 17:32:07 compute-0 ceph-mon[75249]: pgmap v1139: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:07 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/201951822' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 10 17:32:07 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3173546782' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 17:32:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 10 17:32:07 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547306844' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 10 17:32:07 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 10 17:32:07 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188059542' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 10 17:32:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15135 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15136 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15138 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2547306844' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 10 17:32:08 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3188059542' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 10 17:32:08 compute-0 ceph-mon[75249]: from='client.15135 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mon[75249]: from='client.15136 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15140 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:08 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1140: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15142 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:09 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15146 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:09 compute-0 ceph-mon[75249]: from='client.15138 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:09 compute-0 ceph-mon[75249]: from='client.15140 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:09 compute-0 ceph-mon[75249]: pgmap v1140: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:09 compute-0 ceph-mon[75249]: from='client.15142 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T16:59:59.795197+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439161 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:00.795434+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 18.903283 33 0.000417
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 18.985193 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 19.915866 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 19.916015 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=52) [0] r=0 lpr=52 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018323898s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 active pruub 134.575500488s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] exit Reset 0.000158 1 0.000269
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Started
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Start
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] enter Started/Stray
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:01.795616+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.260480881s of 10.348821640s, submitted: 32
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.650333 7 0.000176
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071731 2 0.000068
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.071770 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000070
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 DELETING pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.016832 2 0.000231
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.016988 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=-1 lpr=64 pi=[52,64)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.739154 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe116000/0x0/0x4ffc00000, data 0x49c13/0xb4000, compress 0x0/0x0/0x0, omap 0xb5e2, meta 0x1a24a1e), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:02.795766+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe116000/0x0/0x4ffc00000, data 0x49c13/0xb4000, compress 0x0/0x0/0x0, omap 0xb5e2, meta 0x1a24a1e), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:03.795945+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:04.796100+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443393 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:05.796258+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:06.796408+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:07.796650+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4b229/0xb7000, compress 0x0/0x0/0x0, omap 0xb692, meta 0x1a2496e), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:08.796865+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 35.178636 56 0.000295
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 35.522821 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 36.536740 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 36.536799 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483779907s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 active pruub 141.736114502s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] exit Reset 0.000209 1 0.000365
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Started
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Start
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] enter Started/Stray
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 1499136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:09.797039+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:38.991334+0000 osd.0 (osd.0) 42 : cluster [DBG] 7.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:39.001773+0000 osd.0 (osd.0) 43 : cluster [DBG] 7.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022619 6 0.000297
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 43)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:38.991334+0000 osd.0 (osd.0) 42 : cluster [DBG] 7.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:39.001773+0000 osd.0 (osd.0) 43 : cluster [DBG] 7.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138844 3 0.000713
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138897 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000077 1 0.000083
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.024160 2 0.000210
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.024307 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=-1 lpr=67 pi=[48,67)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.186475 0 0.000000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447709 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 1433600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:10.797430+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:40.075419+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:40.145251+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 45)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:40.075419+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.13 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:40.145251+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.13 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 1433600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:11.797640+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 1425408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:12.797839+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.325577736s of 11.136447906s, submitted: 18
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 1409024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:13.798080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:43.073333+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.17 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:43.083776+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.17 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 47)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:43.073333+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.17 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:43.083776+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.17 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 1400832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:14.798313+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:44.107776+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:44.118278+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 49)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:44.107776+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:44.118278+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451349 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 1392640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:15.798559+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 1392640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:16.798751+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:17.798957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:18.799136+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 1376256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:19.799319+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453760 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1368064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:20.799512+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:50.146244+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.8 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:50.156789+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.8 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 51)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:50.146244+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.8 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:50.156789+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.8 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1368064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:21.799810+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:22.800069+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:52.167446+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.15 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:52.177985+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.15 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:23.800448+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 53)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:52.167446+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.15 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:52.177985+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.15 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:24.800597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.089248657s of 12.105111122s, submitted: 8
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458586 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1335296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:25.800860+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:55.178486+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.16 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:55.189069+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.16 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 55)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:55.178486+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.16 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:55.189069+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.16 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 1327104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:26.801153+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:56.159998+0000 osd.0 (osd.0) 56 : cluster [DBG] 7.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:56.170302+0000 osd.0 (osd.0) 57 : cluster [DBG] 7.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 57)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:56.159998+0000 osd.0 (osd.0) 56 : cluster [DBG] 7.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:56.170302+0000 osd.0 (osd.0) 57 : cluster [DBG] 7.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:27.801435+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:57.132905+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:57.143248+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 59)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:57.132905+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:57.143248+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:28.801786+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:58.118921+0000 osd.0 (osd.0) 60 : cluster [DBG] 3.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:00:58.129347+0000 osd.0 (osd.0) 61 : cluster [DBG] 3.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 61)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:58.118921+0000 osd.0 (osd.0) 60 : cluster [DBG] 3.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:00:58.129347+0000 osd.0 (osd.0) 61 : cluster [DBG] 3.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1318912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:29.802021+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465819 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1310720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:30.802213+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1310720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:31.802372+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 1286144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:32.802554+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:01.971813+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:01.982259+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 63)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:01.971813+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:01.982259+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 1269760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:33.802948+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:02.955270+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:02.965811+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 65)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:02.955270+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.2 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:02.965811+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.2 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 1269760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:34.804227+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473052 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 1253376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:35.805490+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:05.022340+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:05.032804+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 67)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:05.022340+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:05.032804+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 1253376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:36.806248+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:37.806565+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:38.807119+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1245184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:39.807345+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473052 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1236992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:40.808245+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 1228800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:41.808509+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.449089050s of 16.955524445s, submitted: 14
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:42.808739+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:12.133022+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:12.143576+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 69)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:12.133022+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:12.143576+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:43.809109+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:44.809443+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 1220608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477874 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:45.809840+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:15.144038+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:15.154550+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1212416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 71)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:15.144038+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:15.154550+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:46.810126+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1212416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:47.810357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:48.810557+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:49.810777+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1204224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480285 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:50.810961+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:20.305589+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:20.316053+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 73)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:20.305589+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:20.316053+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:51.811213+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:52.811370+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.150645256s of 11.165366173s, submitted: 6
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:53.811741+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:23.299639+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:23.310243+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 75)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:23.299639+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:23.310243+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:54.811978+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482698 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:55.812375+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:56.812527+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1155072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:57.812666+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1138688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:58.812829+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:28.345164+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:28.355761+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1138688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 77)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:28.345164+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.6 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:28.355761+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.6 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:00:59.813136+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:29.359069+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:29.369640+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1122304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 79)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:29.359069+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:29.369640+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489931 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:00.813534+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:30.325015+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:30.335518+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 1105920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 81)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:30.325015+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:30.335518+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:01.813807+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 1097728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:02.813979+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:32.442874+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.c scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:32.453548+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.c scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 1064960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 83)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:32.442874+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.c scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:32.453548+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.c scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:03.814560+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 1064960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:04.814774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 1056768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 492342 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:05.814934+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 1056768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:06.815134+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:07.815303+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:08.815438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 1048576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.110231400s of 16.139881134s, submitted: 10
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:09.815597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:39.439440+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:39.450044+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 85)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:39.439440+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.4 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:39.450044+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.4 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494753 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:10.815852+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:11.816036+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:12.816275+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:13.816538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:14.816721+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497164 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:15.816869+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:45.322816+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:45.333508+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 87)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:45.322816+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:45.333508+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:16.817103+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:17.817316+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:18.817476+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:19.817658+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:49.359848+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:49.370356+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 89)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:49.359848+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1f scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:49.370356+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1f scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499577 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:20.817988+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:21.818196+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.795870781s of 12.943486214s, submitted: 6
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:22.818459+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:52.383048+0000 osd.0 (osd.0) 90 : cluster [DBG] 5.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:52.393362+0000 osd.0 (osd.0) 91 : cluster [DBG] 5.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 91)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:52.383048+0000 osd.0 (osd.0) 90 : cluster [DBG] 5.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:52.393362+0000 osd.0 (osd.0) 91 : cluster [DBG] 5.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:23.819200+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:53.349860+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1b scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:53.360349+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1b scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 93)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:53.349860+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.1b scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:53.360349+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.1b scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:24.819472+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 504401 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:25.819759+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:26.819965+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:56.294828+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:01:56.305271+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 95)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:56.294828+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:01:56.305271+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:27.820260+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:28.820481+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:29.820882+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506814 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:30.821047+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:31.821299+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:32.821511+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.704629898s of 10.843565941s, submitted: 6
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:33.821789+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:03.226608+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:03.237147+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 97)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:03.226608+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.18 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:03.237147+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.18 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:34.822416+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:04.192904+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:04.203476+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 99)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:04.192904+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:04.203476+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 511638 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:35.822622+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:36.822785+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:37.822990+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:38.823162+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:08.235978+0000 osd.0 (osd.0) 100 : cluster [DBG] 5.1e scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:08.246454+0000 osd.0 (osd.0) 101 : cluster [DBG] 5.1e scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 101)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:08.235978+0000 osd.0 (osd.0) 100 : cluster [DBG] 5.1e scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:08.246454+0000 osd.0 (osd.0) 101 : cluster [DBG] 5.1e scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:39.823381+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516462 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:40.823546+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:10.278437+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.0 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:10.303151+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.0 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 103)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:10.278437+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.0 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:10.303151+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.0 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:41.823760+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:42.824074+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:43.824262+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:44.824376+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.053336143s of 12.073998451s, submitted: 8
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518873 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:45.824537+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:15.300674+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:15.318179+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 105)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:15.300674+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.3 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:15.318179+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.3 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:46.824768+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:47.824975+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:48.825132+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:49.825303+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518873 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:50.825488+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:51.825629+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:52.825776+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:53.825987+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:23.293442+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:23.307688+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 107)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:23.293442+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.7 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:23.307688+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.7 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:54.826249+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 521284 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:55.826430+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:56.826572+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:57.826811+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.993289948s of 13.001964569s, submitted: 4
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:58.826984+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:28.302620+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:28.313214+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 109)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:28.302620+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.9 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:28.313214+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.9 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:01:59.827216+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:00.827387+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523695 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:01.827522+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:31.351772+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:31.369817+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 111)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:31.351772+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.5 scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:31.369817+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.5 scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:02.827787+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:03.827963+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:04.828121+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:34.386657+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.a scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  will send 2026-01-10T17:02:34.397201+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.a scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client handle_log_ack log(last 113)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:34.386657+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.a scrub starts
Jan 10 17:32:09 compute-0 ceph-osd[85764]: log_client  logged 2026-01-10T17:02:34.397201+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.a scrub ok
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:05.828392+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:06.828586+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:07.828780+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:08.828911+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:09.829038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:10.829208+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:11.829403+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:12.829609+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:13.829872+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:14.830059+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:15.830224+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:16.830390+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:17.830535+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:18.830766+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:19.831036+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:20.831234+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:21.831370+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:22.831654+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:23.831908+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:24.832125+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:25.832263+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:26.832405+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:27.832645+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:28.832846+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:29.833010+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:30.833243+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:31.833401+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:32.833592+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:33.833914+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:34.834083+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:35.834268+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:36.834438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:37.834623+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:38.834800+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:39.834976+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 483328 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:40.835215+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 466944 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:41.835459+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 466944 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:42.835691+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:43.835963+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:44.836167+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:45.836433+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:46.836665+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:47.836855+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:48.837070+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:49.837264+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:50.837434+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:51.837594+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:52.837796+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:53.838043+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:54.838245+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:55.838413+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:56.838558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:57.838727+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:58.838878+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:02:59.839036+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:00.839231+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:01.839440+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:02.839584+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:03.839839+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:04.852872+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:05.853016+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:06.853176+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:07.853369+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:08.853513+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:09.853746+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:10.853946+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:11.854115+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:12.854495+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:13.854770+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:14.854985+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:15.855248+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:16.855478+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:17.855816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:18.856142+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:19.856296+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:20.856652+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:21.857038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:22.857192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:23.857356+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:24.857598+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:25.857895+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:26.858085+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:27.858243+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:28.858417+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:29.858614+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:30.858769+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:31.858921+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:32.859202+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:33.859662+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:34.859903+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:35.860191+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:36.860468+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:37.860681+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:38.860862+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:39.861063+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:40.861225+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:41.861350+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:42.861513+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:43.861743+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:44.861911+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:45.862076+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:46.862737+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:47.863116+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:48.863322+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:49.863505+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:50.863658+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:51.863779+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:52.863988+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:53.864269+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:54.864487+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:55.864767+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:56.865014+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:57.865235+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:58.865471+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:03:59.865765+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:00.865965+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:01.866340+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:02.866491+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:03.866782+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:04.866916+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:05.867160+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:06.867317+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:07.867575+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:08.867764+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:09.867954+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:10.868161+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:11.868405+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:12.868613+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:13.868841+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:14.869012+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:15.869200+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:16.869423+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:17.869632+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:18.869835+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:19.870015+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:20.870161+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:21.870315+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:22.870447+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:23.870685+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:24.870870+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:25.871011+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:26.871146+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:27.871328+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:28.871488+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:29.871623+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:30.871784+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:31.871927+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:32.872065+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:33.872286+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:34.872436+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:35.872576+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:36.872764+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:37.872903+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:38.873087+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:39.873233+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:40.873358+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:41.873500+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:42.873632+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:43.873886+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:44.874012+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 8192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:45.874136+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:46.874316+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 8192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:47.874571+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:48.874900+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:49.875119+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 0 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:50.875328+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:51.875585+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:52.875825+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:53.876045+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:54.876178+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:55.876332+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:56.876608+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:57.876837+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:58.877067+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:04:59.877334+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:00.877516+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:01.877721+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:02.877863+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:03.878059+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:04.878213+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:05.878387+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:06.878570+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:07.878753+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:08.878982+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:09.879285+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:10.879466+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:11.879595+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:12.879801+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:13.880029+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:14.880222+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:15.880400+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:16.880558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:17.880766+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:18.880951+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:19.881128+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:20.881297+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:21.881448+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:22.881608+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:23.881781+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:24.881921+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:25.882067+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:26.882195+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:27.882380+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:28.882542+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:29.882758+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:30.883064+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:31.883188+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:32.883341+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:33.883515+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:34.883672+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:35.883877+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:36.884017+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:37.884172+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:38.884322+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:39.884435+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:40.884617+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:41.884810+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:42.884959+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:43.885483+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:44.885660+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:45.885832+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:46.886034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:47.886186+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:48.886473+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:49.886674+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:50.887003+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:51.887275+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:52.887478+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:53.887839+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:54.887970+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:55.888130+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:56.888327+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:57.888462+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:58.888607+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:05:59.888780+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:00.888957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:01.889139+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:02.889284+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:03.889493+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:04.889664+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:05.889886+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:06.890098+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:07.890260+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:08.890469+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:09.890627+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:10.890798+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:11.890935+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:12.891120+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:13.891302+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:14.891434+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:15.891596+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:16.891787+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:17.892055+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:18.892290+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:19.892456+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:20.892624+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:21.892780+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:22.892934+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:23.893151+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:24.893343+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:25.893517+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:26.893749+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:27.893901+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:28.894086+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:29.894295+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:30.894480+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:31.894630+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:32.894804+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:33.895004+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:34.895140+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:35.895274+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:36.895436+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:37.895623+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:38.895758+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:39.895937+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:40.896113+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:41.896305+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:42.896466+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:43.896636+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:44.896828+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:45.897000+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:46.897200+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:47.897646+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:48.897979+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:49.898194+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:50.898544+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:51.898835+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:52.899157+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:53.899503+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:54.899806+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:55.900024+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:56.900311+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:57.900595+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:58.900844+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:06:59.901150+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:00.901547+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:01.901869+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:02.902103+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:03.902407+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:04.902620+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:05.902799+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:06.903026+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:07.903192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:08.903387+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:09.903521+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:10.903645+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:11.903772+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:12.903937+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:13.904192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:14.904361+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:15.904528+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:16.904719+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:17.904895+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:18.905067+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:19.905215+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:20.905474+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:21.905640+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:22.905807+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:23.906022+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:24.906266+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:25.906451+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:26.906604+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:27.906822+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:28.906984+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:29.907146+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:30.907289+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:31.907427+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:32.907578+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:33.907797+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:34.907963+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:35.908142+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:36.908407+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:37.908584+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:38.908790+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:39.908957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:40.909104+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:41.909291+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:42.909455+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:43.909713+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:44.909920+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:45.910083+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:46.910233+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:47.910382+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:48.910518+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:49.910662+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:50.910770+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:51.910930+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:52.911065+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:53.911323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:54.911477+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:55.911598+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:56.911774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:57.912022+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:58.912205+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s
                                           Interval WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:07:59.912408+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:00.912567+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:01.912722+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:02.912906+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:03.913112+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:04.913323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:05.913519+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:06.913682+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:07.913957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:08.914340+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:09.914592+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:10.914789+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:11.915047+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:12.915225+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:13.915459+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:14.915621+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:15.915931+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:16.916116+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:17.916311+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:18.916532+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:19.916844+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:20.917054+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:21.917268+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:22.917540+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:23.917812+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:24.918012+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:25.918210+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:26.918392+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:27.918597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:28.918805+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:29.919046+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:30.919217+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:31.919418+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:32.919720+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:33.919930+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:34.920055+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:35.920183+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:36.920330+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:37.920484+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:38.920635+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:39.920776+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:40.920923+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:41.921069+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:42.921206+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:43.921391+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:44.921624+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:45.921854+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:46.922106+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:47.922294+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:48.922480+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:49.922771+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:50.923001+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:51.923131+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:52.923360+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:53.923630+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:54.923756+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:55.923879+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:56.924023+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:57.924186+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:58.924385+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:08:59.924562+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:00.924806+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:01.925051+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:02.925193+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:03.925421+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:04.925610+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:05.925845+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:06.925996+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:07.926169+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:08.926362+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:09.926534+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:10.926730+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:11.926875+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:12.926999+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:13.927218+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:14.927420+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:15.927581+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:16.927774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:17.927952+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:18.928080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:19.928223+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:20.928404+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:21.928595+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:22.928773+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:23.929115+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:24.929266+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:25.929436+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:26.929606+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:27.929816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:28.929990+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:29.930139+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:30.930321+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:31.930468+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:32.930623+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:33.930879+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:34.931093+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:35.931276+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:36.931511+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:37.931746+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:38.931957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:39.932139+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:40.932406+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:41.932605+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:42.932819+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:43.933057+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:44.933178+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:45.933370+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:46.933520+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:47.933692+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:48.933909+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:49.934108+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:50.934261+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:51.934402+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:52.934556+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:53.934774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:54.934927+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:55.935119+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:56.935357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:57.935492+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:58.935671+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:09:59.935868+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:00.936065+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:01.936251+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:02.936386+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:03.936541+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:04.936723+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:05.936879+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:06.937113+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:07.937278+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:08.937427+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:09.937598+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:10.937772+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:11.937916+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:12.938104+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:13.938312+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:14.938455+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:15.938770+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:16.938924+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:17.939111+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:18.939301+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:19.939965+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:20.940198+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:21.940358+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:22.940834+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:23.941165+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:24.941334+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:25.941571+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:26.941776+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:27.942227+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:28.942395+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:29.942541+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:30.942765+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:31.942921+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:32.943070+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:33.943383+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:34.943579+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:35.943793+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:36.943984+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:37.944146+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:38.944352+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:39.944490+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:40.944646+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:41.944951+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:42.945169+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:43.945393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:44.945550+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:45.945804+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:46.945968+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:47.946118+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:48.946277+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:49.946386+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:50.946505+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:51.946656+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:52.946820+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:53.947006+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:54.947157+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:55.947288+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:56.947451+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:57.947625+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:58.947809+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:10:59.947973+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:00.948156+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:01.948307+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:02.948449+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:03.948643+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:04.948782+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:05.948922+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:06.949140+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:07.949294+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:08.949465+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:09.949655+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:10.949773+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:11.949941+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:12.950080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:13.950302+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:14.950528+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:15.950693+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:16.950884+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:17.951038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:18.951185+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:19.951518+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:20.951793+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:21.951954+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:22.952142+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:23.952389+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:24.952544+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:25.952747+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:26.953156+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:27.953332+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:28.953467+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:29.953689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:30.953895+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:31.954043+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:32.954175+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:33.954360+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:34.954511+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:35.954785+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:36.954969+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:37.955151+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:38.955326+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:39.955488+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:40.955671+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:41.955823+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:42.955976+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:43.956190+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:44.956357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:45.956626+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:46.956786+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:47.956992+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:48.957131+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:49.957283+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:50.957538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:51.957879+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:52.958075+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:53.958269+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:54.958463+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:55.958626+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:56.958791+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:57.958924+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:58.959065+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:11:59.959232+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:00.959398+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:01.959529+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:02.959739+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:03.960009+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:04.960169+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:05.960334+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:06.960512+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:07.960669+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:08.960868+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:09.961047+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:10.961296+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:11.961456+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:12.961594+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:13.961808+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:14.961984+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:15.962264+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:16.962425+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:17.962585+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:18.962794+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:19.962931+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:20.963130+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:21.963374+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:22.963543+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:23.963771+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:24.963934+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:25.964087+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:26.964277+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:27.964468+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:28.964662+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:29.964856+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:30.964977+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:31.965167+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:32.965366+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:33.965562+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:34.965735+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:35.965877+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:36.966012+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:37.966192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:38.966322+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:39.966448+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:40.966625+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:41.966833+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:42.967080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:43.967379+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:44.967571+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:45.967774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:46.967955+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:47.968157+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:48.968367+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:49.968538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:50.968735+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:51.968923+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:52.969119+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:53.969367+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:54.969519+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:55.969730+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:56.969944+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:57.970105+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:58.970265+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:12:59.970498+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:00.970690+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:01.970941+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:02.971097+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:03.971359+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:04.971615+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:05.971851+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:06.972061+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc ms_handle_reset ms_handle_reset con 0x560f2f55e000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: get_auth_request con 0x560f2ea1b400 auth_method 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:07.972246+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:08.972438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:09.972665+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:10.972835+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:11.972974+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:12.973154+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:13.973372+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:14.973570+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:15.973762+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:16.973944+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:17.974859+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 ms_handle_reset con 0x560f2f55f000 session 0x560f3060f180
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66cc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:18.975026+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:19.975176+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:20.975349+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:21.975506+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:22.975655+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:23.975933+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:24.976108+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:25.976253+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:26.976427+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:27.976590+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:28.976754+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:29.976902+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:30.977080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:31.977314+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:32.977598+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:33.977753+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:34.977952+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:35.978109+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:36.978339+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:37.978513+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:38.978675+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:39.978873+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:40.979051+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:41.979209+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:42.979364+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:43.979547+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:44.979736+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:45.979909+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:46.980049+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:47.980228+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:48.980390+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:49.980587+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:50.980776+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:51.980953+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:52.981085+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:53.981260+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:54.981420+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:55.981558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:56.981722+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:57.981827+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:58.981982+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:13:59.982130+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:00.982284+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:01.982469+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:02.982669+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:03.982927+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:04.983170+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:05.983352+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:06.983504+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:07.983637+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:08.983877+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:09.984038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:10.984189+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:11.984405+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:12.984589+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:13.984780+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:14.984901+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:15.985013+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:16.985160+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:17.985328+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:18.985538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:19.985810+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:20.986025+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:21.986189+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:22.986315+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:23.986513+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:24.986737+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:25.986904+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:26.987066+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:27.987266+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:28.987443+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:29.987574+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:30.987792+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:31.987933+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:32.988069+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:33.988304+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:34.988476+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:35.988641+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:36.988834+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:37.989054+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:38.989210+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:39.989435+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:40.989597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:41.989785+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:42.989968+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:43.990236+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:44.990451+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:45.990614+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:46.990840+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:47.990982+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:48.991107+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:49.991250+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:50.991380+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:51.991547+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:52.991689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:53.991905+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:54.992048+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:55.992313+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:56.992581+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:57.992757+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:58.992922+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:14:59.993191+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:00.993681+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:01.993887+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:02.994094+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:03.994334+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:04.994524+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:05.994725+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:06.995256+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:07.995450+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:08.995612+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:09.995960+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:10.996162+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:11.996330+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:12.996472+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:13.996669+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:14.997180+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:15.997467+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:16.997658+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:17.997812+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:18.997996+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:19.998282+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:20.998457+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:21.998664+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:22.998849+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:23.999102+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:24.999285+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:25.999431+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:26.999618+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:27.999848+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:29.000066+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:30.000237+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:31.000444+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:32.000689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:33.000939+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:34.001174+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:35.001380+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:36.001597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:37.002156+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:38.002315+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:39.002974+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:40.004013+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:41.005224+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:42.006289+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:43.006568+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:44.006981+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:45.007475+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:46.007687+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:47.007892+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:48.008029+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:49.008233+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:50.008752+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:51.009018+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:52.009352+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:53.009612+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:54.009815+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:55.009964+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:56.010192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:57.010433+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:58.010589+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:15:59.010787+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:00.010967+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:01.011256+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:02.011484+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:03.011620+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:04.011809+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:05.012010+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:06.012238+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:07.012420+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:08.012616+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:09.012835+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:10.013034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:11.013211+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:12.013366+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:13.013558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:14.013885+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:15.014023+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:16.014158+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:17.014348+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:18.014502+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:19.014668+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:20.014829+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:21.015005+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:22.015242+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:23.015381+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:24.015538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:25.015785+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:26.015959+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:27.016129+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:28.016299+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:29.016441+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:30.016614+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:31.016750+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:32.016881+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:33.017078+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:34.017357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:35.017552+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:36.017757+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:37.017900+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:38.018099+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:39.018275+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:40.018438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:41.018644+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:42.019177+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:43.019441+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:44.019683+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:45.019909+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:46.020199+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:47.020541+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:48.020835+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:49.021031+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:50.021782+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:51.022059+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:52.022298+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:53.022515+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:54.022747+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:55.022942+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:56.023150+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:57.023336+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:58.023558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:16:59.023813+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:00.023987+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:01.024268+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:02.024607+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:03.024917+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:04.025163+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:05.025492+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:06.025672+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:07.025859+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:08.026049+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:09.026205+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:10.026402+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:11.026619+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:12.026782+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:13.026949+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:14.027191+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:15.027325+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:16.027482+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:17.027640+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:18.027848+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:19.028024+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:20.028212+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:21.028350+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:22.028498+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:23.028641+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:24.028855+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:25.029032+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:26.029199+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:27.029357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:28.029553+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:29.029793+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:30.029950+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:31.030084+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:32.030255+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:33.030418+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:34.030588+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:35.030783+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:36.030950+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:37.031091+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:38.031236+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:39.031403+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:40.031567+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:41.031765+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:42.031972+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:43.032166+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:44.032343+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:45.032504+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:46.032633+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:47.034175+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:48.034291+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:49.034449+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:50.034643+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:51.034788+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:52.034909+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:53.035135+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:54.035383+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:55.035636+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:56.035935+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:57.036199+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:58.036432+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:17:59.036816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc19a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:00.037053+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:01.037240+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:02.070507+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:03.070870+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:04.071177+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:05.071373+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:06.071599+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:07.071770+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:08.072028+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:09.072242+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:10.072419+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:11.072645+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:12.072880+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:13.073107+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:14.073403+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:15.073597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:16.073801+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:17.074056+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:18.074269+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:19.074528+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:20.074738+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:21.075000+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:22.075325+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:23.075610+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:24.076192+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:25.076503+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:26.076811+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:27.077097+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:28.077339+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:29.077628+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:30.077931+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:31.078233+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:32.078477+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:33.078827+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:34.079214+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:35.079398+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:36.079572+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:37.079756+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:38.079893+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:39.080672+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:40.080816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:41.080988+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:42.081124+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:43.081475+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:44.081843+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:45.082017+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:46.082296+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:47.082567+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:48.082786+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:49.082984+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:50.083112+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:51.084516+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:52.085117+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:53.085297+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:54.085838+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:55.085962+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:56.086152+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:57.086460+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:58.086648+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:18:59.086805+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:00.087035+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:01.087247+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:02.087390+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:03.087599+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:04.087795+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:05.088014+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528517 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:06.088198+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:07.088467+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:08.088646+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:09.088885+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1031.683105469s of 1031.696044922s, submitted: 6
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:10.089085+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4da59/0xbb000, compress 0x0/0x0/0x0, omap 0xb767, meta 0x1a24899), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 532009 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f026/0xbe000, compress 0x0/0x0/0x0, omap 0xb817, meta 0x1a247e9), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:11.089303+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31728c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 17121280 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:12.089514+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 71 ms_handle_reset con 0x560f31728c00 session 0x560f31817500
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 17088512 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bfc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:13.089754+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 16875520 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:14.090037+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 71 heartbeat osd_stat(store_statfs(0x4fd903000/0x0/0x4ffc00000, data 0x851c53/0x8c5000, compress 0x0/0x0/0x0, omap 0xb7f0, meta 0x1a24810), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 72 ms_handle_reset con 0x560f2f5bfc00 session 0x560f3177fdc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:15.090260+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 585074 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd902000/0x0/0x4ffc00000, data 0x85323c/0x8c8000, compress 0x0/0x0/0x0, omap 0xb837, meta 0x1a247c9), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:16.090482+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:17.090717+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:18.090915+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 16859136 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:19.091126+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:20.091373+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:21.091588+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 16990208 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:22.091827+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:23.092041+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:24.092244+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:25.092457+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:26.092630+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:27.092858+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:28.093083+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:29.093225+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:30.093359+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:31.093470+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:32.093654+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:33.093829+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:34.094074+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:35.094243+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:36.094413+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:37.094590+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:38.094824+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:39.095005+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:40.095143+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:41.095347+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:42.095581+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:43.095775+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:44.095960+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:45.096123+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:46.096284+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:47.096508+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:48.096767+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:49.096917+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:50.097130+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:51.097279+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:52.097512+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:53.097813+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:54.098129+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:55.098344+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd8ff000/0x0/0x4ffc00000, data 0x8546ec/0x8cb000, compress 0x0/0x0/0x0, omap 0xb93b, meta 0x1a246c5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587846 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 16982016 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:56.098660+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bec00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.543552399s of 46.655376434s, submitted: 33
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 16842752 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5bec00 session 0x560f2ffde8c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:57.099040+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5be000 session 0x560f317d4380
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 74 ms_handle_reset con 0x560f2f5be400 session 0x560f317f0380
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:58.099214+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:19:59.099370+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 76 ms_handle_reset con 0x560f2f5be000 session 0x560f3178d880
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:00.099767+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600858 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 16834560 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:01.099911+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 77 ms_handle_reset con 0x560f2f5be400 session 0x560f317d41c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd8f4000/0x0/0x4ffc00000, data 0x8588da/0x8d6000, compress 0x0/0x0/0x0, omap 0xb51b, meta 0x1a24ae5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5bfc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd8ef000/0x0/0x4ffc00000, data 0x859efb/0x8d9000, compress 0x0/0x0/0x0, omap 0xb1ab, meta 0x1a24e55), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16801792 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:02.100106+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 78 ms_handle_reset con 0x560f2f5bfc00 session 0x560f30088e00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fd8f0000/0x0/0x4ffc00000, data 0x85b4e9/0x8da000, compress 0x0/0x0/0x0, omap 0xae93, meta 0x1a2516d), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 16793600 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:03.100380+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 16793600 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fd8f0000/0x0/0x4ffc00000, data 0x85b4e9/0x8da000, compress 0x0/0x0/0x0, omap 0xae93, meta 0x1a2516d), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:04.100689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 79 ms_handle_reset con 0x560f2f66d400 session 0x560f317f1a40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31bebc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 16482304 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:05.100905+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 81 ms_handle_reset con 0x560f31bebc00 session 0x560f3178c700
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622265 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 16424960 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:06.101302+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 82 ms_handle_reset con 0x560f2f5be000 session 0x560f3177e700
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 16384000 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31709c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:07.101466+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.323926926s of 10.448411942s, submitted: 71
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 83 ms_handle_reset con 0x560f31709c00 session 0x560f3060fa40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 83 ms_handle_reset con 0x560f30369c00 session 0x560f30089880
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 16195584 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:08.101765+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f3063fc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 16171008 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:09.101983+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fd8d3000/0x0/0x4ffc00000, data 0x86383f/0x8f3000, compress 0x0/0x0/0x0, omap 0xb11d, meta 0x1a24ee3), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 85 ms_handle_reset con 0x560f3063fc00 session 0x560f2f35ec40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f3063f800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 15990784 heap: 84934656 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:10.102185+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772835 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 24190976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fc8ce000/0x0/0x4ffc00000, data 0x186646d/0x18fa000, compress 0x0/0x0/0x0, omap 0xa68f, meta 0x1a25971), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:11.102360+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 85 ms_handle_reset con 0x560f3063f800 session 0x560f317b6540
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 24322048 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:12.102511+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fb8cd000/0x0/0x4ffc00000, data 0x2867a3a/0x28fd000, compress 0x0/0x0/0x0, omap 0xa6e7, meta 0x1a25919), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 87 ms_handle_reset con 0x560f2f5be000 session 0x560f317b61c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 24363008 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:13.102678+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fd8c9000/0x0/0x4ffc00000, data 0x86904b/0x8ff000, compress 0x0/0x0/0x0, omap 0xa167, meta 0x1a25e99), peers [1,2] op hist [1,1])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 88 ms_handle_reset con 0x560f30369c00 session 0x560f31817c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 88 ms_handle_reset con 0x560f31be8c00 session 0x560f3113ec40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 23265280 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:14.102970+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 89 ms_handle_reset con 0x560f31be8800 session 0x560f317428c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fd8c6000/0x0/0x4ffc00000, data 0x86a28c/0x900000, compress 0x0/0x0/0x0, omap 0x11d7f, meta 0x1a1e281), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 23175168 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:15.103140+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fd8c6000/0x0/0x4ffc00000, data 0x86a28c/0x900000, compress 0x0/0x0/0x0, omap 0x11d7f, meta 0x1a1e281), peers [1,2] op hist [1])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663562 data_alloc: 218103808 data_used: 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 90 ms_handle_reset con 0x560f31be8400 session 0x560f31816e00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 21725184 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:16.103323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 91 ms_handle_reset con 0x560f2f5be000 session 0x560f2ffdefc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 21618688 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:17.103485+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f30369c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.685506821s of 10.200369835s, submitted: 215
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 92 ms_handle_reset con 0x560f30369c00 session 0x560f317d5500
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 21577728 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:18.103642+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 92 ms_handle_reset con 0x560f31be8800 session 0x560f316daa80
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 93 ms_handle_reset con 0x560f31be8c00 session 0x560f2ffdea80
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:19.103837+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:20.104114+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fc720000/0x0/0x4ffc00000, data 0x86feab/0x908000, compress 0x0/0x0/0x0, omap 0x1311a, meta 0x2bbcee6), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671629 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:21.104478+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:22.104754+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fc71f000/0x0/0x4ffc00000, data 0x871377/0x90b000, compress 0x0/0x0/0x0, omap 0x1345f, meta 0x2bbcba1), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fc71f000/0x0/0x4ffc00000, data 0x871377/0x90b000, compress 0x0/0x0/0x0, omap 0x1345f, meta 0x2bbcba1), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:23.104939+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 21528576 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:24.105139+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 94 ms_handle_reset con 0x560f31be8000 session 0x560f317b7340
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f5be000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 21397504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:25.105301+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f31be8000 session 0x560f318161c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f2f5be000 session 0x560f3171bdc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 ms_handle_reset con 0x560f31be8800 session 0x560f3171ac40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681074 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:26.105486+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:27.105662+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:28.105828+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:29.105985+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:30.106186+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681074 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:31.106393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:32.106560+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 10 17:32:09 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741490794' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:33.106769+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fc716000/0x0/0x4ffc00000, data 0x873fad/0x912000, compress 0x0/0x0/0x0, omap 0x13c02, meta 0x2bbc3fe), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 21274624 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:34.106997+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.961437225s of 17.086120605s, submitted: 79
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:35.107162+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683238 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:36.107300+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:37.107438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fc715000/0x0/0x4ffc00000, data 0x87545d/0x915000, compress 0x0/0x0/0x0, omap 0x13f54, meta 0x2bbc0ac), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 20209664 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:38.107598+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 97 handle_osd_map epochs [99,99], i have 97, src has [1,99]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 99 ms_handle_reset con 0x560f31be8c00 session 0x560f3171bc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:39.107778+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31728c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:40.107934+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 99 heartbeat osd_stat(store_statfs(0x4fc70f000/0x0/0x4ffc00000, data 0x87804b/0x91b000, compress 0x0/0x0/0x0, omap 0x14184, meta 0x2bbbe7c), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690988 data_alloc: 218103808 data_used: 8154
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 20119552 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:41.108073+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 20103168 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:42.108210+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 19972096 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:43.108376+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 100 ms_handle_reset con 0x560f2f66d000 session 0x560f2f652a80
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 100 ms_handle_reset con 0x560f2f66c000 session 0x560f3180bc00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 19775488 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:44.108519+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.925825119s of 10.007178307s, submitted: 36
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 101 ms_handle_reset con 0x560f2f66d000 session 0x560f3180a000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:45.108772+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 19742720 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 101 ms_handle_reset con 0x560f2f66d400 session 0x560f2f652a80
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fc709000/0x0/0x4ffc00000, data 0x87ac47/0x921000, compress 0x0/0x0/0x0, omap 0x14abf, meta 0x2bbb541), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 697182 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:46.108933+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 19677184 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 102 ms_handle_reset con 0x560f31be9400 session 0x560f2f3521c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:47.109067+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 19628032 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 103 ms_handle_reset con 0x560f31be8c00 session 0x560f303ee540
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:48.109220+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 19587072 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:49.109388+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 19587072 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fc704000/0x0/0x4ffc00000, data 0x87d887/0x926000, compress 0x0/0x0/0x0, omap 0x152df, meta 0x2bbad21), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 103 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fc704000/0x0/0x4ffc00000, data 0x87d887/0x926000, compress 0x0/0x0/0x0, omap 0x152df, meta 0x2bbad21), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:50.109527+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 19709952 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705131 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:51.109735+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:52.109896+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:53.110088+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:54.110324+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 19693568 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66c000 session 0x560f3178ddc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66d000 session 0x560f3178cfc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 104 ms_handle_reset con 0x560f2f66d400 session 0x560f3178da40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.917822838s of 10.005137444s, submitted: 54
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31be9400 session 0x560f31848380
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:55.110520+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 19685376 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31be8800 session 0x560f318488c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fc6fe000/0x0/0x4ffc00000, data 0x880374/0x92c000, compress 0x0/0x0/0x0, omap 0x1584b, meta 0x2bba7b5), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 707441 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:56.110685+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 19554304 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66c000 session 0x560f31848c40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66d000 session 0x560f31848fc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f2f66d400 session 0x560f31849340
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:57.110841+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 19439616 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be8000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:58.111057+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:20:59.111231+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fc6dc000/0x0/0x4ffc00000, data 0x8a4374/0x950000, compress 0x0/0x0/0x0, omap 0x15a0d, meta 0x2bba5f3), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:00.111416+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 711202 data_alloc: 218103808 data_used: 10186
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:01.111627+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 19415040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 ms_handle_reset con 0x560f31701800 session 0x560f3171a8c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31a4c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f31a4c000 session 0x560f31848380
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f31701c00 session 0x560f2f653a40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:02.111859+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 19152896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 106 ms_handle_reset con 0x560f2f66c000 session 0x560f303eefc0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 106 handle_osd_map epochs [106,107], i have 107, src has [1,107]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 107 ms_handle_reset con 0x560f2f66d000 session 0x560f31742380
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:03.112083+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:04.112351+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fc6cf000/0x0/0x4ffc00000, data 0x8a854b/0x959000, compress 0x0/0x0/0x0, omap 0x161d1, meta 0x2bb9e2f), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:05.112519+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 19120128 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.317912102s of 11.386803627s, submitted: 46
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 108 ms_handle_reset con 0x560f2f66d400 session 0x560f3171a700
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 719524 data_alloc: 218103808 data_used: 10186
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:06.112729+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701800
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 19087360 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31701800 session 0x560f317b6a80
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fc6d3000/0x0/0x4ffc00000, data 0x8a854b/0x959000, compress 0x0/0x0/0x0, omap 0x1636c, meta 0x2bb9c94), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:07.112871+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19070976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:08.113039+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31be9400 session 0x560f31bfba40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 109 ms_handle_reset con 0x560f31be8000 session 0x560f3171ae00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19070976 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 110 ms_handle_reset con 0x560f2f66c000 session 0x560f31848c40
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:09.113195+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x8871c5/0x93b000, compress 0x0/0x0/0x0, omap 0x17b0c, meta 0x2bb84f4), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:10.113353+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726486 data_alloc: 218103808 data_used: 8138
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x888691/0x93e000, compress 0x0/0x0/0x0, omap 0x17ec5, meta 0x2bb813b), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:11.113517+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f31728c00 session 0x560f2ffde8c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f2f66d000 session 0x560f31bfb180
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:12.113657+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19054592 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66c000
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 111 ms_handle_reset con 0x560f2f66c000 session 0x560f3180b500
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31be9400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:13.113842+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 112 ms_handle_reset con 0x560f31be9400 session 0x560f3171b340
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:14.114087+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:15.114235+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x889cb2/0x940000, compress 0x0/0x0/0x0, omap 0x18501, meta 0x2bb7aff), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.869652748s of 10.011584282s, submitted: 101
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 731600 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:16.114393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:17.114564+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:18.114816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:19.114991+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:20.115178+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fc6e7000/0x0/0x4ffc00000, data 0x88b17e/0x943000, compress 0x0/0x0/0x0, omap 0x187b3, meta 0x2bb784d), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:21.115384+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:22.115536+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _renew_subs
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:23.115760+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:24.115993+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:25.116246+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:26.116547+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:27.116792+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:28.117034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:29.117218+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:30.117383+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:31.117613+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:32.117840+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:33.118002+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:34.118295+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:35.118523+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:36.118811+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:37.119020+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:38.119232+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:39.119524+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:40.119791+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:41.120034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:42.120212+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:43.120471+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:44.120771+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:45.120994+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:46.121200+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:47.121357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:48.121554+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:49.121838+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:50.122021+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:51.122193+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:52.122393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:53.122543+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:54.122840+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:55.123056+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:56.123254+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:57.123485+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:58.123688+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:21:59.124017+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:00.124208+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:01.124357+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:02.124516+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:03.124657+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:04.124947+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:05.125079+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:06.125286+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:07.125496+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:08.126389+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 18980864 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:09.126626+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:10.126830+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:11.127010+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:12.127218+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:13.127372+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:14.127592+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:15.127832+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:16.127993+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:17.128164+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:18.128326+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:19.128493+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:20.128667+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:21.128846+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:22.129044+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:23.129204+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:24.129386+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:25.129593+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:26.129779+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:27.129939+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:28.130064+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:29.130217+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:30.130393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:31.130622+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:32.130783+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:33.130962+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:34.131266+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:35.131475+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:36.131666+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:37.131866+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:38.132038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:39.132401+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:40.132561+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:41.132730+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:42.132873+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:43.146758+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:44.146957+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:45.147086+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:46.147249+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:47.147392+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:48.147529+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:49.147637+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 18972672 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:50.147753+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 18882560 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:51.147884+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 18276352 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:52.148015+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 18472960 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:53.148152+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 7430144 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:54.148337+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf dump' '{prefix=perf dump}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf schema' '{prefix=perf schema}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 18415616 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:55.148512+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:56.148648+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:57.148789+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:58.148908+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:22:59.149026+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:00.149155+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:01.149278+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:02.149391+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:03.149531+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:04.149832+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:05.149994+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:06.150124+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:07.150289+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:08.150451+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:09.150603+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:10.150732+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:11.150866+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:12.151028+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:13.151257+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:14.151467+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:15.151618+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:16.151770+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:17.151927+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:18.152095+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:19.152310+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread fragmentation_score=0.000123 took=0.000055s
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:20.152433+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:21.152560+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:22.152923+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:23.153145+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:24.155043+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:25.155234+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:26.155416+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:27.155602+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:28.155810+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:29.155963+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:30.156137+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:31.156496+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:32.156641+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:33.156916+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:34.157188+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:35.157345+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:36.157562+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:37.157791+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:38.158020+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:39.158263+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:40.158505+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:41.158777+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:42.158958+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:43.159141+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:44.159484+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:45.159856+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:46.160062+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:47.160269+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:48.160473+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:49.160774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:50.160967+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:51.161126+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:52.161368+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:53.161538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:54.161798+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:55.162024+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:56.162219+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:57.162455+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:58.162737+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:23:59.162990+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:00.163224+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:01.163374+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:02.163587+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:03.163853+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:04.164166+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:05.164341+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:06.166574+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:07.166785+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:08.167010+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 18407424 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:09.167290+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:10.167525+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:11.167803+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:12.168016+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:13.168333+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:14.168748+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:15.169202+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:16.169660+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:17.170130+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:18.170516+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:19.170870+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:20.171074+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:21.171304+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:22.171500+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:23.171815+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:24.172106+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:25.172323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:26.172534+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:27.172747+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:28.172924+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:29.173099+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:30.173255+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:31.173384+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:32.173517+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:33.173739+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:34.173967+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:35.174204+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:36.174408+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:37.174620+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:38.174820+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:39.175080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:40.175235+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:41.175393+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:42.175557+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:43.175826+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:44.176073+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:45.176265+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:46.176520+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:47.176784+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:48.176988+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:49.177141+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:50.177311+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:51.177488+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:52.177620+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:53.177789+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:54.177997+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:55.178182+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:56.178303+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:57.178475+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:58.178765+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:24:59.178979+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:00.179159+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:01.179348+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:02.179544+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:03.179746+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:04.179934+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:05.180128+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:06.180358+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:07.180495+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:08.180792+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:09.180981+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:10.181195+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:11.181457+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:12.181786+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:13.181972+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:14.182222+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:15.182437+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:16.182679+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:17.183838+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:18.184098+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:19.184828+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:20.185490+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:21.185923+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:22.186349+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:23.186688+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:24.187070+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:25.187343+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:26.187551+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 18399232 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:27.187819+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:28.188039+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:29.188206+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:30.188382+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:31.188629+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:32.189164+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:33.189542+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:34.189811+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:35.189991+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:36.190199+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:37.190425+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:38.190688+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:39.190936+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:40.191194+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:41.191416+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:42.191689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:43.192018+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:44.192263+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:45.192479+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:46.192674+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:47.192951+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:48.193127+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:49.193296+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:50.193449+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:51.193597+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 18391040 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:52.193883+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:53.194077+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:54.194364+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:55.194613+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:56.194815+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:57.195033+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:58.195185+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:25:59.195467+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:00.195822+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:01.196090+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:02.262771+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:03.263034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:04.263323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:05.263603+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:06.263869+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:07.264089+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:08.264314+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:09.264547+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:10.264781+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:11.264969+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 18382848 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:12.265205+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:13.265428+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:14.265783+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:15.266129+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:16.266418+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:17.266775+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:18.267080+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:19.267366+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:20.267600+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:21.267806+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:22.268038+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:23.268194+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:24.268520+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:25.268774+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:26.268969+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:27.269187+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:28.269329+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:29.269541+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:30.269778+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:31.269942+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:32.270137+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:33.270299+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:34.270519+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:35.270736+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:36.270889+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:37.271104+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 18374656 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:38.271333+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:39.271483+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:40.271614+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:41.271766+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:42.271885+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:43.272083+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:44.272324+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:45.272483+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:46.272672+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:47.272838+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:48.273057+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:49.273307+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:50.273561+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:51.273784+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:52.273987+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:53.274244+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:54.274633+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:55.274895+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:56.275346+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:57.275687+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 18366464 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:58.276032+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:26:59.276373+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:00.276602+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:01.276801+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:02.395435+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:03.395669+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:04.396039+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:05.396478+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:06.396870+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:07.397126+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:08.397428+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:09.397660+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:10.397937+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:11.398170+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:12.398510+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:13.398865+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:14.399311+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:15.399621+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:16.400064+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:17.400452+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:18.400790+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:19.401067+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:20.401386+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 18358272 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:21.401600+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 18350080 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:22.401830+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 18350080 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:23.402075+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:24.402404+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:25.402677+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:26.402986+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:27.403208+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:28.403457+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:29.403821+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:30.404238+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:31.404584+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:32.404865+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:33.405232+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:34.405566+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:35.405887+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:36.406259+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:37.406558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:38.407045+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:39.407404+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:40.407737+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:41.408136+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 18341888 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:42.408480+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:43.408877+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:44.409370+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:45.409629+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:46.409872+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:47.410142+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:48.410408+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:49.410763+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:50.411070+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:51.411297+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:52.411513+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:53.411867+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:54.412307+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:55.412648+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:56.413034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:57.413378+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:58.413768+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:27:59.414131+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5552 writes, 23K keys, 5552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5552 writes, 988 syncs, 5.62 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1173 writes, 3350 keys, 1173 commit groups, 1.0 writes per commit group, ingest: 1.88 MB, 0.00 MB/s
                                           Interval WAL: 1173 writes, 520 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 18333696 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:00.414376+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:01.414570+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:02.414804+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:03.415042+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:04.415386+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:05.415625+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:06.415901+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 18325504 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc ms_handle_reset ms_handle_reset con 0x560f2ea1b400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: get_auth_request con 0x560f2f66d000 auth_method 0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: mgrc handle_mgr_configure stats_period=5
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:07.416158+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 18128896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:08.416538+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 18128896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:09.416825+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 18128896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:10.417118+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 18128896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:11.417452+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 18128896 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:12.417838+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 18120704 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:13.418194+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 18120704 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:14.418591+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 18120704 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:15.418951+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 18120704 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:16.419309+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 18112512 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:17.419525+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 18112512 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 ms_handle_reset con 0x560f2f66cc00 session 0x560f30089340
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f2f66d400
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:18.419874+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:19.420149+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:20.420412+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:21.420767+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:22.421017+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:23.421323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:24.421689+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:25.422045+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:26.422261+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:27.422570+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:28.422982+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:29.423247+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:30.423797+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:31.423967+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:32.424368+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:33.424741+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:34.425775+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:35.426300+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:36.426479+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:37.426642+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:38.426999+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:39.427438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:40.428028+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:41.428335+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:42.428914+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:43.429307+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:44.429621+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:45.429773+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:46.430040+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:47.430284+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:48.430607+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 ms_handle_reset con 0x560f30366400 session 0x560f300881c0
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: handle_auth_request added challenge on 0x560f31701c00
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:49.430924+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:50.431327+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:51.431679+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:52.431947+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:53.432131+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:54.432385+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:55.432552+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:56.432754+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:57.432951+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:58.433140+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:28:59.433396+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:00.433719+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:01.434566+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:02.434880+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:03.435271+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:04.435777+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:05.436311+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:06.436613+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:07.436922+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:08.437206+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:09.437407+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:10.437882+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:11.438374+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:12.438921+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:13.439495+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:14.440203+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:15.440659+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:16.441024+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:17.441228+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:18.441558+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:19.441812+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:20.442236+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:21.442658+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:22.443183+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:23.443477+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:24.443919+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:25.444556+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:26.444950+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:27.445210+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:28.445520+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:29.445911+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:30.446342+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:31.446589+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:32.446944+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:33.447470+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:34.448291+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:35.448967+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:36.450022+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:37.451227+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:38.451536+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:39.451942+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:40.452839+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:41.453286+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:42.454405+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:43.454992+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:44.455675+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:45.456175+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:46.456632+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:47.457267+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:48.457927+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:49.458323+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:50.458683+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:51.459327+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:52.459992+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:53.460533+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:54.460979+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:55.461260+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:56.461569+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:57.463218+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:58.463607+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:29:59.464079+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:00.464482+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:01.464875+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:02.465284+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 18259968 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:03.465515+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:04.466470+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:05.466643+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:06.466866+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:07.466983+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:08.468223+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:09.468844+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:10.470361+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:11.472185+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:12.472840+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:13.473574+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:14.474065+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:15.474330+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:16.475034+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:17.475617+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:18.476086+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:19.476457+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:20.476733+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:21.476997+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:22.477226+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:23.477480+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:24.477869+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:25.478180+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:26.478463+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:27.478812+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:28.479015+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:29.479215+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:30.479447+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:31.479653+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:32.479996+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:33.480187+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:34.480438+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:35.480686+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:36.480928+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:37.481165+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:38.481387+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:39.481567+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:40.481909+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:41.482420+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 18251776 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:42.482769+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:43.483018+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:44.483368+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:45.483643+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:46.483876+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:47.484124+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:48.484433+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:49.485020+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:50.485399+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:51.485822+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 18243584 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:52.486121+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:53.486299+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:54.486631+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:55.487204+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:56.487582+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:57.487910+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:58.488208+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:30:59.488481+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:00.489028+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:01.489215+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:02.527938+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:03.528488+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:04.528808+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:05.529052+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:06.529363+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:07.529654+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:08.529845+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:09.530005+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:10.530160+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:11.530342+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:12.530929+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:13.531118+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:14.531494+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:15.531679+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:16.531879+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:17.532078+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:18.532258+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:19.532442+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:20.532626+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:21.532921+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:22.533103+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:23.533276+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:24.533471+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:25.533648+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:26.533778+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:27.534030+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:28.534266+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:29.534419+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:30.534650+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:31.534913+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:32.535090+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:33.535248+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:34.535444+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 18235392 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:35.535622+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fc6e4000/0x0/0x4ffc00000, data 0x88c62e/0x946000, compress 0x0/0x0/0x0, omap 0x18aa8, meta 0x2bb7558), peers [1,2] op hist [])
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 18227200 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:36.535816+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 18079744 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:37.535972+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 18038784 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:38.536988+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 17:32:09 compute-0 ceph-osd[85764]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 17:32:09 compute-0 ceph-osd[85764]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734374 data_alloc: 218103808 data_used: 8122
Jan 10 17:32:09 compute-0 ceph-osd[85764]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 17973248 heap: 93331456 old mem: 2845415832 new mem: 2845415832
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: tick
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_tickets
Jan 10 17:32:09 compute-0 ceph-osd[85764]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-10T17:31:39.537171+0000)
Jan 10 17:32:09 compute-0 ceph-osd[85764]: do_command 'log dump' '{prefix=log dump}'
Jan 10 17:32:10 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 10 17:32:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3951297787' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 10 17:32:10 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15154 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:10 compute-0 ceph-mon[75249]: from='client.15146 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3741490794' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 10 17:32:10 compute-0 ceph-mon[75249]: from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:10 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3951297787' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 10 17:32:10 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1141: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:10 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 10 17:32:10 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528723675' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:32:11 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 10 17:32:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205121458' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='client.15154 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: pgmap v1141: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3528723675' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='client.15158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1205121458' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 10 17:32:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 10 17:32:11 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 10 17:32:11 compute-0 sshd-session[260222]: Invalid user admin from 216.36.124.133 port 32906
Jan 10 17:32:12 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 10 17:32:12 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451017032' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 10 17:32:12 compute-0 sshd-session[260222]: Connection closed by invalid user admin 216.36.124.133 port 32906 [preauth]
Jan 10 17:32:12 compute-0 systemd[1]: Starting Hostname Service...
Jan 10 17:32:12 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 10 17:32:12 compute-0 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 10 17:32:12 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2451017032' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 10 17:32:12 compute-0 systemd[1]: Started Hostname Service.
Jan 10 17:32:12 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15172 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:12 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1142: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 10 17:32:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92559751' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 10 17:32:13 compute-0 ceph-mon[75249]: from='client.15172 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:13 compute-0 ceph-mon[75249]: pgmap v1142: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:13 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/92559751' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 10 17:32:13 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 10 17:32:13 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126133898' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 10 17:32:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 10 17:32:14 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878839587' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 10 17:32:14 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/4126133898' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 10 17:32:14 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3878839587' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 10 17:32:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 17:32:14 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 10 17:32:14 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477830072' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 10 17:32:14 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1143: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:15 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15182 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:15 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/2477830072' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 10 17:32:15 compute-0 ceph-mon[75249]: pgmap v1143: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:15 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 10 17:32:15 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325886679' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 10 17:32:16 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 10 17:32:16 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996759322' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 10 17:32:16 compute-0 ceph-mon[75249]: from='client.15182 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1325886679' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 10 17:32:16 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/1996759322' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 10 17:32:16 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1144: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:16 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15188 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:17 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 10 17:32:17 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351039257' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 10 17:32:17 compute-0 ceph-mon[75249]: pgmap v1144: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:17 compute-0 ceph-mon[75249]: from='client.15188 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:17 compute-0 ceph-mon[75249]: from='client.? 192.168.122.100:0/3351039257' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 10 17:32:17 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15192 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:18 compute-0 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15194 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:18 compute-0 ceph-mon[75249]: from='client.15192 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:18 compute-0 ceph-mon[75249]: from='client.15194 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 17:32:18 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 10 17:32:18 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371869035' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 10 17:32:18 compute-0 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1145: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 17:32:19 compute-0 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 10 17:32:19 compute-0 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025709342' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
